00:00:00.000 Started by upstream project "autotest-per-patch" build number 132168
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.013 The recommended git tool is: git
00:00:00.013 using credential 00000000-0000-0000-0000-000000000002
00:00:00.015 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.035 Fetching changes from the remote Git repository
00:00:00.037 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.059 Using shallow fetch with depth 1
00:00:00.059 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.059 > git --version # timeout=10
00:00:00.089 > git --version # 'git version 2.39.2'
00:00:00.089 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.122 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.122 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.304 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.317 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.334 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:04.334 > git config core.sparsecheckout # timeout=10
00:00:04.349 > git read-tree -mu HEAD # timeout=10
00:00:04.368 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:04.395 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:04.395 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:04.522 [Pipeline] Start of Pipeline
00:00:04.535 [Pipeline] library
00:00:04.536 Loading library shm_lib@master
00:00:04.537 Library shm_lib@master is cached. Copying from home.
00:00:04.550 [Pipeline] node
00:00:19.552 Still waiting to schedule task
00:00:19.553 Waiting for next available executor on ‘vagrant-vm-host’
00:12:45.453 Running on VM-host-SM38 in /var/jenkins/workspace/raid-vg-autotest
00:12:45.456 [Pipeline] {
00:12:45.471 [Pipeline] catchError
00:12:45.473 [Pipeline] {
00:12:45.488 [Pipeline] wrap
00:12:45.497 [Pipeline] {
00:12:45.505 [Pipeline] stage
00:12:45.507 [Pipeline] { (Prologue)
00:12:45.526 [Pipeline] echo
00:12:45.528 Node: VM-host-SM38
00:12:45.535 [Pipeline] cleanWs
00:12:45.544 [WS-CLEANUP] Deleting project workspace...
00:12:45.544 [WS-CLEANUP] Deferred wipeout is used...
00:12:45.550 [WS-CLEANUP] done
00:12:45.741 [Pipeline] setCustomBuildProperty
00:12:45.837 [Pipeline] httpRequest
00:12:46.235 [Pipeline] echo
00:12:46.237 Sorcerer 10.211.164.101 is alive
00:12:46.247 [Pipeline] retry
00:12:46.249 [Pipeline] {
00:12:46.264 [Pipeline] httpRequest
00:12:46.268 HttpMethod: GET
00:12:46.269 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:12:46.269 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:12:46.270 Response Code: HTTP/1.1 200 OK
00:12:46.271 Success: Status code 200 is in the accepted range: 200,404
00:12:46.271 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:12:46.430 [Pipeline] }
00:12:46.449 [Pipeline] // retry
00:12:46.457 [Pipeline] sh
00:12:46.735 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:12:46.751 [Pipeline] httpRequest
00:12:47.149 [Pipeline] echo
00:12:47.152 Sorcerer 10.211.164.101 is alive
00:12:47.165 [Pipeline] retry
00:12:47.167 [Pipeline] {
00:12:47.188 [Pipeline] httpRequest
00:12:47.193 HttpMethod: GET
00:12:47.194 URL: http://10.211.164.101/packages/spdk_5b0ad6d606b40a6e2e6d2ffa23dc11cdcb1c081d.tar.gz
00:12:47.194 Sending request to url: http://10.211.164.101/packages/spdk_5b0ad6d606b40a6e2e6d2ffa23dc11cdcb1c081d.tar.gz
00:12:47.195 Response Code: HTTP/1.1 200 OK
00:12:47.195 Success: Status code 200 is in the accepted range: 200,404
00:12:47.196 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_5b0ad6d606b40a6e2e6d2ffa23dc11cdcb1c081d.tar.gz
00:12:49.870 [Pipeline] }
00:12:49.887 [Pipeline] // retry
00:12:49.895 [Pipeline] sh
00:12:50.174 + tar --no-same-owner -xf spdk_5b0ad6d606b40a6e2e6d2ffa23dc11cdcb1c081d.tar.gz
00:12:53.476 [Pipeline] sh
00:12:53.753 + git -C spdk log --oneline -n5
00:12:53.753 5b0ad6d60 test/nvme/xnvme: Drop null_blk
00:12:53.753 99b414ab0 test/nvme/xnvme: Tidy the test suite
00:12:53.753 5972a9e6e test/nvme/xnvme: Add io_uring_cmd
00:12:53.753 0b6f1f9f0 test/nvme/xnvme: Add different io patterns
00:12:53.753 5ba0cf90d test/nvme/xnvme: Add simple RPC validation test
00:12:53.773 [Pipeline] writeFile
00:12:53.790 [Pipeline] sh
00:12:54.068 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:12:54.079 [Pipeline] sh
00:12:54.357 + cat autorun-spdk.conf
00:12:54.357 SPDK_RUN_FUNCTIONAL_TEST=1
00:12:54.357 SPDK_RUN_ASAN=1
00:12:54.357 SPDK_RUN_UBSAN=1
00:12:54.357 SPDK_TEST_RAID=1
00:12:54.357 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:54.364 RUN_NIGHTLY=0
00:12:54.366 [Pipeline] }
00:12:54.380 [Pipeline] // stage
00:12:54.397 [Pipeline] stage
00:12:54.399 [Pipeline] { (Run VM)
00:12:54.412 [Pipeline] sh
00:12:54.722 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:12:54.722 + echo 'Start stage prepare_nvme.sh'
00:12:54.722 Start stage prepare_nvme.sh
00:12:54.722 + [[ -n 6 ]]
00:12:54.722 + disk_prefix=ex6
00:12:54.722 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:12:54.722 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:12:54.722 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:12:54.722 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:12:54.722 ++ SPDK_RUN_ASAN=1
00:12:54.722 ++ SPDK_RUN_UBSAN=1
00:12:54.722 ++ SPDK_TEST_RAID=1
00:12:54.722 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:54.722 ++ RUN_NIGHTLY=0
00:12:54.722 + cd /var/jenkins/workspace/raid-vg-autotest
00:12:54.722 + nvme_files=()
00:12:54.722 + declare -A nvme_files
00:12:54.722 + backend_dir=/var/lib/libvirt/images/backends
00:12:54.722 + nvme_files['nvme.img']=5G
00:12:54.722 + nvme_files['nvme-cmb.img']=5G
00:12:54.722 + nvme_files['nvme-multi0.img']=4G
00:12:54.722 + nvme_files['nvme-multi1.img']=4G
00:12:54.722 + nvme_files['nvme-multi2.img']=4G
00:12:54.722 + nvme_files['nvme-openstack.img']=8G
00:12:54.722 + nvme_files['nvme-zns.img']=5G
00:12:54.722 + (( SPDK_TEST_NVME_PMR == 1 ))
00:12:54.722 + (( SPDK_TEST_FTL == 1 ))
00:12:54.722 + (( SPDK_TEST_NVME_FDP == 1 ))
00:12:54.722 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:12:54.722 + for nvme in "${!nvme_files[@]}"
00:12:54.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:12:54.722 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:12:54.722 + for nvme in "${!nvme_files[@]}"
00:12:54.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:12:54.722 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:12:54.723 + for nvme in "${!nvme_files[@]}"
00:12:54.723 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:12:54.723 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:12:54.723 + for nvme in "${!nvme_files[@]}"
00:12:54.723 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:12:54.723 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:12:54.723 + for nvme in "${!nvme_files[@]}"
00:12:54.723 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:12:54.723 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:12:54.723 + for nvme in "${!nvme_files[@]}"
00:12:54.723 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:12:54.723 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:12:54.723 + for nvme in "${!nvme_files[@]}"
00:12:54.723 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:12:54.981 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:12:54.981 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:12:54.981 + echo 'End stage prepare_nvme.sh'
00:12:54.981 End stage prepare_nvme.sh
00:12:54.993 [Pipeline] sh
00:12:55.274 + DISTRO=fedora39
00:12:55.274 + CPUS=10
00:12:55.274 + RAM=12288
00:12:55.274 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:12:55.274 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39
00:12:55.274
00:12:55.274 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:12:55.274 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:12:55.274 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:12:55.274 HELP=0
00:12:55.274 DRY_RUN=0
00:12:55.274 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,
00:12:55.274 NVME_DISKS_TYPE=nvme,nvme,
00:12:55.274 NVME_AUTO_CREATE=0
00:12:55.274 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,
00:12:55.274 NVME_CMB=,,
00:12:55.274 NVME_PMR=,,
00:12:55.274 NVME_ZNS=,,
00:12:55.274 NVME_MS=,,
00:12:55.274 NVME_FDP=,,
00:12:55.274 SPDK_VAGRANT_DISTRO=fedora39
00:12:55.274 SPDK_VAGRANT_VMCPU=10
00:12:55.274 SPDK_VAGRANT_VMRAM=12288
00:12:55.274 SPDK_VAGRANT_PROVIDER=libvirt
00:12:55.274 SPDK_VAGRANT_HTTP_PROXY=
00:12:55.274 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:12:55.274 SPDK_OPENSTACK_NETWORK=0
00:12:55.274 VAGRANT_PACKAGE_BOX=0
00:12:55.274 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:12:55.274 FORCE_DISTRO=true
00:12:55.274 VAGRANT_BOX_VERSION=
00:12:55.274 EXTRA_VAGRANTFILES=
00:12:55.274 NIC_MODEL=e1000
00:12:55.274
00:12:55.274 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:12:55.274 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:12:57.802 Bringing machine 'default' up with 'libvirt' provider...
00:12:58.060 ==> default: Creating image (snapshot of base box volume).
00:12:58.369 ==> default: Creating domain with the following settings...
00:12:58.369 ==> default:  -- Name:              fedora39-39-1.5-1721788873-2326_default_1731085234_079d693f014530a24d12
00:12:58.369 ==> default:  -- Domain type:       kvm
00:12:58.369 ==> default:  -- Cpus:              10
00:12:58.369 ==> default:  -- Feature:           acpi
00:12:58.369 ==> default:  -- Feature:           apic
00:12:58.369 ==> default:  -- Feature:           pae
00:12:58.369 ==> default:  -- Memory:            12288M
00:12:58.369 ==> default:  -- Memory Backing:    hugepages:
00:12:58.369 ==> default:  -- Management MAC:
00:12:58.369 ==> default:  -- Loader:
00:12:58.369 ==> default:  -- Nvram:
00:12:58.369 ==> default:  -- Base box:          spdk/fedora39
00:12:58.369 ==> default:  -- Storage pool:      default
00:12:58.369 ==> default:  -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731085234_079d693f014530a24d12.img (20G)
00:12:58.369 ==> default:  -- Volume Cache:      default
00:12:58.369 ==> default:  -- Kernel:
00:12:58.369 ==> default:  -- Initrd:
00:12:58.369 ==> default:  -- Graphics Type:     vnc
00:12:58.369 ==> default:  -- Graphics Port:     -1
00:12:58.369 ==> default:  -- Graphics IP:       127.0.0.1
00:12:58.369 ==> default:  -- Graphics Password: Not defined
00:12:58.369 ==> default:  -- Video Type:        cirrus
00:12:58.369 ==> default:  -- Video VRAM:        9216
00:12:58.369 ==> default:  -- Sound Type:
00:12:58.369 ==> default:  -- Keymap:            en-us
00:12:58.369 ==> default:  -- TPM Path:
00:12:58.369 ==> default:  -- INPUT:             type=mouse, bus=ps2
00:12:58.369 ==> default:  -- Command line args:
00:12:58.369 ==> default:     -> value=-device,
00:12:58.369 ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:12:58.369 ==> default:     -> value=-drive,
00:12:58.369 ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0,
00:12:58.369 ==> default:     -> value=-device,
00:12:58.369 ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:58.369 ==> default:     -> value=-device,
00:12:58.369 ==> default:     -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:12:58.369 ==> default:     -> value=-drive,
00:12:58.369 ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:12:58.369 ==> default:     -> value=-device,
00:12:58.369 ==> default:     -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:58.369 ==> default:     -> value=-drive,
00:12:58.369 ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:12:58.369 ==> default:     -> value=-device,
00:12:58.369 ==> default:     -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:58.369 ==> default:     -> value=-drive,
00:12:58.369 ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:12:58.369 ==> default:     -> value=-device,
00:12:58.369 ==> default:     -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:58.369 ==> default: Creating shared folders metadata...
00:12:58.369 ==> default: Starting domain.
00:12:59.743 ==> default: Waiting for domain to get an IP address...
00:13:17.824 ==> default: Waiting for SSH to become available...
00:13:18.761 ==> default: Configuring and enabling network interfaces...
00:13:22.946     default: SSH address: 192.168.121.232:22
00:13:22.946     default: SSH username: vagrant
00:13:22.946     default: SSH auth method: private key
00:13:24.337 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:13:34.300 ==> default: Mounting SSHFS shared folder...
00:13:35.246 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:13:35.246 ==> default: Checking Mount..
00:13:36.630 ==> default: Folder Successfully Mounted!
00:13:36.630
00:13:36.630   SUCCESS!
00:13:36.630
00:13:36.630   cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:13:36.630   Use vagrant "suspend" and vagrant "resume" to stop and start.
00:13:36.630   Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:13:36.630
00:13:36.638 [Pipeline] }
00:13:36.654 [Pipeline] // stage
00:13:36.664 [Pipeline] dir
00:13:36.665 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:13:36.666 [Pipeline] {
00:13:36.680 [Pipeline] catchError
00:13:36.682 [Pipeline] {
00:13:36.696 [Pipeline] sh
00:13:36.971 + vagrant ssh-config --host vagrant
00:13:36.971 + sed -ne '/^Host/,$p'
00:13:36.971 + tee ssh_conf
00:13:39.515 Host vagrant
00:13:39.515   HostName 192.168.121.232
00:13:39.515   User vagrant
00:13:39.515   Port 22
00:13:39.515   UserKnownHostsFile /dev/null
00:13:39.515   StrictHostKeyChecking no
00:13:39.515   PasswordAuthentication no
00:13:39.515   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:13:39.515   IdentitiesOnly yes
00:13:39.515   LogLevel FATAL
00:13:39.515   ForwardAgent yes
00:13:39.515   ForwardX11 yes
00:13:39.515
00:13:39.528 [Pipeline] withEnv
00:13:39.531 [Pipeline] {
00:13:39.545 [Pipeline] sh
00:13:39.842 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:13:39.843 source /etc/os-release
00:13:39.843 [[ -e /image.version ]] && img=$(< /image.version)
00:13:39.843 # Minimal, systemd-like check.
00:13:39.843 if [[ -e /.dockerenv ]]; then
00:13:39.843 # Clear garbage from the node'\''s name:
00:13:39.843 # agt-er_autotest_547-896 -> autotest_547-896
00:13:39.843 # $HOSTNAME is the actual container id
00:13:39.843 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:13:39.843 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:13:39.843 # We can assume this is a mount from a host where container is running,
00:13:39.843 # so fetch its hostname to easily identify the target swarm worker.
00:13:39.843 container="$(< /etc/hostname) ($agent)"
00:13:39.843 else
00:13:39.843 # Fallback
00:13:39.843 container=$agent
00:13:39.843 fi
00:13:39.843 fi
00:13:39.843 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:13:39.843 '
00:13:40.113 [Pipeline] }
00:13:40.129 [Pipeline] // withEnv
00:13:40.138 [Pipeline] setCustomBuildProperty
00:13:40.156 [Pipeline] stage
00:13:40.159 [Pipeline] { (Tests)
00:13:40.181 [Pipeline] sh
00:13:40.460 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:13:40.733 [Pipeline] sh
00:13:41.011 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:13:41.026 [Pipeline] timeout
00:13:41.027 Timeout set to expire in 1 hr 30 min
00:13:41.029 [Pipeline] {
00:13:41.045 [Pipeline] sh
00:13:41.343 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:13:42.718 HEAD is now at 5b0ad6d60 test/nvme/xnvme: Drop null_blk
00:13:42.732 [Pipeline] sh
00:13:43.010 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:13:43.281 [Pipeline] sh
00:13:43.561 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:13:43.877 [Pipeline] sh
00:13:44.155 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo'
00:13:44.414 ++ readlink -f spdk_repo
00:13:44.414 + DIR_ROOT=/home/vagrant/spdk_repo
00:13:44.414 + [[ -n /home/vagrant/spdk_repo ]]
00:13:44.414 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:13:44.414 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:13:44.414 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:13:44.414 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:13:44.414 + [[ -d /home/vagrant/spdk_repo/output ]]
00:13:44.414 + [[ raid-vg-autotest == pkgdep-* ]]
00:13:44.414 + cd /home/vagrant/spdk_repo
00:13:44.414 + source /etc/os-release
00:13:44.414 ++ NAME='Fedora Linux'
00:13:44.414 ++ VERSION='39 (Cloud Edition)'
00:13:44.414 ++ ID=fedora
00:13:44.414 ++ VERSION_ID=39
00:13:44.414 ++ VERSION_CODENAME=
00:13:44.414 ++ PLATFORM_ID=platform:f39
00:13:44.414 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:13:44.414 ++ ANSI_COLOR='0;38;2;60;110;180'
00:13:44.414 ++ LOGO=fedora-logo-icon
00:13:44.414 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:13:44.414 ++ HOME_URL=https://fedoraproject.org/
00:13:44.414 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:13:44.414 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:13:44.414 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:13:44.414 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:13:44.414 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:13:44.414 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:13:44.414 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:13:44.414 ++ SUPPORT_END=2024-11-12
00:13:44.414 ++ VARIANT='Cloud Edition'
00:13:44.414 ++ VARIANT_ID=cloud
00:13:44.414 + uname -a
00:13:44.414 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:13:44.414 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:13:44.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:44.672 Hugepages
00:13:44.672 node     hugesize     free /  total
00:13:44.672 node0   1048576kB        0 /      0
00:13:44.672 node0      2048kB        0 /      0
00:13:44.672
00:13:44.672 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:13:44.672 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:13:44.672 NVMe     0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:13:44.931 NVMe     0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:13:44.931 + rm -f /tmp/spdk-ld-path
00:13:44.931 + source autorun-spdk.conf
00:13:44.931 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:13:44.931 ++ SPDK_RUN_ASAN=1
00:13:44.931 ++ SPDK_RUN_UBSAN=1
00:13:44.931 ++ SPDK_TEST_RAID=1
00:13:44.931 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:13:44.931 ++ RUN_NIGHTLY=0
00:13:44.931 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:13:44.931 + [[ -n '' ]]
00:13:44.931 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:13:44.931 + for M in /var/spdk/build-*-manifest.txt
00:13:44.931 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:13:44.931 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:13:44.931 + for M in /var/spdk/build-*-manifest.txt
00:13:44.931 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:13:44.931 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:13:44.931 + for M in /var/spdk/build-*-manifest.txt
00:13:44.931 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:13:44.931 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:13:44.931 ++ uname
00:13:44.931 + [[ Linux == \L\i\n\u\x ]]
00:13:44.931 + sudo dmesg -T
00:13:44.931 + sudo dmesg --clear
00:13:44.931 + dmesg_pid=4993
00:13:44.931 + sudo dmesg -Tw
00:13:44.931 + [[ Fedora Linux == FreeBSD ]]
00:13:44.931 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:13:44.931 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:13:44.931 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:13:44.931 + [[ -x /usr/src/fio-static/fio ]]
00:13:44.931 + export FIO_BIN=/usr/src/fio-static/fio
00:13:44.931 + FIO_BIN=/usr/src/fio-static/fio
00:13:44.931 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:13:44.931 + [[ ! -v VFIO_QEMU_BIN ]]
00:13:44.931 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:13:44.931 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:44.931 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:44.931 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:13:44.931 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:13:44.931 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:13:44.931 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:13:45.193 17:01:21 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:13:45.193 17:01:21 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:13:45.193 17:01:21 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:13:45.193 17:01:21 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:13:45.193 17:01:21 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:13:45.193 17:01:21 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:13:45.193 17:01:21 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:13:45.193 17:01:21 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:13:45.193 17:01:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:13:45.193 17:01:21 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:13:45.193 17:01:21 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:13:45.193 17:01:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:45.193 17:01:21 -- scripts/common.sh@15 -- $ shopt -s extglob
00:13:45.193 17:01:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:13:45.193 17:01:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:45.193 17:01:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:45.193 17:01:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:45.193 17:01:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:45.193 17:01:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:45.193 17:01:21 -- paths/export.sh@5 -- $ export PATH
00:13:45.193 17:01:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:45.193 17:01:21 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:13:45.193 17:01:21 -- common/autobuild_common.sh@486 -- $ date +%s
00:13:45.193 17:01:21 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731085281.XXXXXX
00:13:45.193 17:01:21 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731085281.ZuS5CR
00:13:45.193 17:01:21 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:13:45.193 17:01:21 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:13:45.193 17:01:21 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:13:45.193 17:01:21 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:13:45.193 17:01:21 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:13:45.193 17:01:21 -- common/autobuild_common.sh@502 -- $ get_config_params
00:13:45.193 17:01:21 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:13:45.193 17:01:21 -- common/autotest_common.sh@10 -- $ set +x
00:13:45.193 17:01:21 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:13:45.193 17:01:21 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:13:45.193 17:01:21 -- pm/common@17 -- $ local monitor
00:13:45.193 17:01:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:13:45.193 17:01:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:13:45.193 17:01:21 -- pm/common@25 -- $ sleep 1
00:13:45.193 17:01:21 -- pm/common@21 -- $ date +%s
00:13:45.193 17:01:21 -- pm/common@21 -- $ date +%s
00:13:45.193 17:01:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731085281
00:13:45.193 17:01:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731085281
00:13:45.193 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731085281_collect-vmstat.pm.log
00:13:45.193 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731085281_collect-cpu-load.pm.log
00:13:46.167 17:01:22 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:13:46.167 17:01:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:13:46.167 17:01:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:13:46.167 17:01:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:13:46.167 17:01:22 -- spdk/autobuild.sh@16 -- $ date -u
00:13:46.167 Fri Nov 8 05:01:22 PM UTC 2024
00:13:46.167 17:01:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:13:46.167 v25.01-pre-184-g5b0ad6d60
00:13:46.167 17:01:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:13:46.167 17:01:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:13:46.167 17:01:22 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:13:46.167 17:01:22 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:13:46.167 17:01:22 -- common/autotest_common.sh@10 -- $ set +x
00:13:46.167 ************************************
00:13:46.167 START TEST asan
00:13:46.167 ************************************
00:13:46.167 using asan
00:13:46.167 17:01:22 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:13:46.167
00:13:46.167 real	0m0.000s
00:13:46.167 user	0m0.000s
00:13:46.167 sys	0m0.000s
00:13:46.167 ************************************
00:13:46.167 END TEST asan
00:13:46.167 ************************************
00:13:46.167 17:01:22 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:13:46.167 17:01:22 asan -- common/autotest_common.sh@10 -- $ set +x
00:13:46.428 17:01:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:13:46.428 17:01:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:13:46.428 17:01:22 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:13:46.428 17:01:22 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:13:46.428 17:01:22 -- common/autotest_common.sh@10 -- $ set +x
00:13:46.428 ************************************
00:13:46.428 START TEST ubsan
00:13:46.428 ************************************
00:13:46.428 using ubsan
00:13:46.428 ************************************
00:13:46.428 END TEST ubsan
00:13:46.428 ************************************
00:13:46.428 17:01:22 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:13:46.428
00:13:46.428 real	0m0.000s
00:13:46.428 user	0m0.000s
00:13:46.428 sys	0m0.000s
00:13:46.428 17:01:22 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:13:46.428 17:01:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:13:46.428 17:01:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:13:46.428 17:01:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:13:46.428 17:01:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:13:46.428 17:01:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:13:46.428 17:01:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:13:46.428 17:01:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:13:46.428 17:01:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:13:46.428 17:01:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:13:46.428 17:01:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:13:46.428 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:13:46.428 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:13:46.998 Using 'verbs' RDMA provider
00:13:59.852 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:14:09.820 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:14:09.820 Creating mk/config.mk...done.
00:14:09.820 Creating mk/cc.flags.mk...done.
00:14:09.820 Type 'make' to build.
00:14:09.821 17:01:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:14:09.821 17:01:45 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:14:09.821 17:01:45 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:14:09.821 17:01:45 -- common/autotest_common.sh@10 -- $ set +x
00:14:09.821 ************************************
00:14:09.821 START TEST make
00:14:09.821 ************************************
00:14:09.821 17:01:45 make -- common/autotest_common.sh@1127 -- $ make -j10
00:14:09.821 make[1]: Nothing to be done for 'all'.
00:14:19.792 The Meson build system
00:14:19.792 Version: 1.5.0
00:14:19.792 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:14:19.792 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:14:19.792 Build type: native build
00:14:19.792 Program cat found: YES (/usr/bin/cat)
00:14:19.792 Project name: DPDK
00:14:19.792 Project version: 24.03.0
00:14:19.792 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:14:19.792 C linker for the host machine: cc ld.bfd 2.40-14
00:14:19.792 Host machine cpu family: x86_64
00:14:19.792 Host machine cpu: x86_64
00:14:19.792 Message: ## Building in Developer Mode ##
00:14:19.793 Program pkg-config found: YES (/usr/bin/pkg-config)
00:14:19.793 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:14:19.793 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:14:19.793 Program python3 found: YES (/usr/bin/python3)
00:14:19.793 Program cat found: YES (/usr/bin/cat)
00:14:19.793 Compiler for C supports arguments -march=native: YES
00:14:19.793 Checking for size of "void *" : 8
00:14:19.793 Checking for size of "void *" : 8 (cached)
00:14:19.793 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:14:19.793 Library m found: YES
00:14:19.793 Library numa found: YES
00:14:19.793 Has header "numaif.h" : YES
00:14:19.793 Library fdt found: NO
00:14:19.793 Library execinfo found: NO
00:14:19.793 Has header "execinfo.h" : YES
00:14:19.793 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:14:19.793 Run-time dependency libarchive found: NO (tried pkgconfig)
00:14:19.793 Run-time dependency libbsd found: NO (tried pkgconfig)
00:14:19.793 Run-time dependency jansson found: NO (tried pkgconfig)
00:14:19.793 Run-time dependency openssl found: YES 3.1.1
00:14:19.793 Run-time dependency libpcap found: YES 1.10.4
00:14:19.793 Has header "pcap.h" with dependency
libpcap: YES 00:14:19.793 Compiler for C supports arguments -Wcast-qual: YES 00:14:19.793 Compiler for C supports arguments -Wdeprecated: YES 00:14:19.793 Compiler for C supports arguments -Wformat: YES 00:14:19.793 Compiler for C supports arguments -Wformat-nonliteral: NO 00:14:19.793 Compiler for C supports arguments -Wformat-security: NO 00:14:19.793 Compiler for C supports arguments -Wmissing-declarations: YES 00:14:19.793 Compiler for C supports arguments -Wmissing-prototypes: YES 00:14:19.793 Compiler for C supports arguments -Wnested-externs: YES 00:14:19.793 Compiler for C supports arguments -Wold-style-definition: YES 00:14:19.793 Compiler for C supports arguments -Wpointer-arith: YES 00:14:19.793 Compiler for C supports arguments -Wsign-compare: YES 00:14:19.793 Compiler for C supports arguments -Wstrict-prototypes: YES 00:14:19.793 Compiler for C supports arguments -Wundef: YES 00:14:19.793 Compiler for C supports arguments -Wwrite-strings: YES 00:14:19.793 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:14:19.793 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:14:19.793 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:14:19.793 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:14:19.793 Program objdump found: YES (/usr/bin/objdump) 00:14:19.793 Compiler for C supports arguments -mavx512f: YES 00:14:19.793 Checking if "AVX512 checking" compiles: YES 00:14:19.793 Fetching value of define "__SSE4_2__" : 1 00:14:19.793 Fetching value of define "__AES__" : 1 00:14:19.793 Fetching value of define "__AVX__" : 1 00:14:19.793 Fetching value of define "__AVX2__" : 1 00:14:19.793 Fetching value of define "__AVX512BW__" : 1 00:14:19.793 Fetching value of define "__AVX512CD__" : 1 00:14:19.793 Fetching value of define "__AVX512DQ__" : 1 00:14:19.793 Fetching value of define "__AVX512F__" : 1 00:14:19.793 Fetching value of define "__AVX512VL__" : 1 00:14:19.793 Fetching value of define 
"__PCLMUL__" : 1 00:14:19.793 Fetching value of define "__RDRND__" : 1 00:14:19.793 Fetching value of define "__RDSEED__" : 1 00:14:19.793 Fetching value of define "__VPCLMULQDQ__" : 1 00:14:19.793 Fetching value of define "__znver1__" : (undefined) 00:14:19.793 Fetching value of define "__znver2__" : (undefined) 00:14:19.793 Fetching value of define "__znver3__" : (undefined) 00:14:19.793 Fetching value of define "__znver4__" : (undefined) 00:14:19.793 Library asan found: YES 00:14:19.793 Compiler for C supports arguments -Wno-format-truncation: YES 00:14:19.793 Message: lib/log: Defining dependency "log" 00:14:19.793 Message: lib/kvargs: Defining dependency "kvargs" 00:14:19.793 Message: lib/telemetry: Defining dependency "telemetry" 00:14:19.793 Library rt found: YES 00:14:19.793 Checking for function "getentropy" : NO 00:14:19.793 Message: lib/eal: Defining dependency "eal" 00:14:19.793 Message: lib/ring: Defining dependency "ring" 00:14:19.793 Message: lib/rcu: Defining dependency "rcu" 00:14:19.793 Message: lib/mempool: Defining dependency "mempool" 00:14:19.793 Message: lib/mbuf: Defining dependency "mbuf" 00:14:19.793 Fetching value of define "__PCLMUL__" : 1 (cached) 00:14:19.793 Fetching value of define "__AVX512F__" : 1 (cached) 00:14:19.793 Fetching value of define "__AVX512BW__" : 1 (cached) 00:14:19.793 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:14:19.793 Fetching value of define "__AVX512VL__" : 1 (cached) 00:14:19.793 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:14:19.793 Compiler for C supports arguments -mpclmul: YES 00:14:19.793 Compiler for C supports arguments -maes: YES 00:14:19.793 Compiler for C supports arguments -mavx512f: YES (cached) 00:14:19.793 Compiler for C supports arguments -mavx512bw: YES 00:14:19.793 Compiler for C supports arguments -mavx512dq: YES 00:14:19.793 Compiler for C supports arguments -mavx512vl: YES 00:14:19.793 Compiler for C supports arguments -mvpclmulqdq: YES 00:14:19.793 Compiler for C 
supports arguments -mavx2: YES 00:14:19.793 Compiler for C supports arguments -mavx: YES 00:14:19.793 Message: lib/net: Defining dependency "net" 00:14:19.793 Message: lib/meter: Defining dependency "meter" 00:14:19.793 Message: lib/ethdev: Defining dependency "ethdev" 00:14:19.793 Message: lib/pci: Defining dependency "pci" 00:14:19.793 Message: lib/cmdline: Defining dependency "cmdline" 00:14:19.793 Message: lib/hash: Defining dependency "hash" 00:14:19.793 Message: lib/timer: Defining dependency "timer" 00:14:19.793 Message: lib/compressdev: Defining dependency "compressdev" 00:14:19.793 Message: lib/cryptodev: Defining dependency "cryptodev" 00:14:19.793 Message: lib/dmadev: Defining dependency "dmadev" 00:14:19.793 Compiler for C supports arguments -Wno-cast-qual: YES 00:14:19.793 Message: lib/power: Defining dependency "power" 00:14:19.793 Message: lib/reorder: Defining dependency "reorder" 00:14:19.793 Message: lib/security: Defining dependency "security" 00:14:19.793 Has header "linux/userfaultfd.h" : YES 00:14:19.793 Has header "linux/vduse.h" : YES 00:14:19.793 Message: lib/vhost: Defining dependency "vhost" 00:14:19.793 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:14:19.793 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:14:19.793 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:14:19.793 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:14:19.793 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:14:19.793 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:14:19.793 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:14:19.793 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:14:19.793 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:14:19.793 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:14:19.793 Program doxygen found: YES 
(/usr/local/bin/doxygen) 00:14:19.793 Configuring doxy-api-html.conf using configuration 00:14:19.793 Configuring doxy-api-man.conf using configuration 00:14:19.793 Program mandb found: YES (/usr/bin/mandb) 00:14:19.793 Program sphinx-build found: NO 00:14:19.793 Configuring rte_build_config.h using configuration 00:14:19.793 Message: 00:14:19.793 ================= 00:14:19.793 Applications Enabled 00:14:19.793 ================= 00:14:19.793 00:14:19.793 apps: 00:14:19.793 00:14:19.793 00:14:19.793 Message: 00:14:19.793 ================= 00:14:19.793 Libraries Enabled 00:14:19.793 ================= 00:14:19.793 00:14:19.793 libs: 00:14:19.793 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:14:19.793 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:14:19.793 cryptodev, dmadev, power, reorder, security, vhost, 00:14:19.793 00:14:19.793 Message: 00:14:19.793 =============== 00:14:19.793 Drivers Enabled 00:14:19.793 =============== 00:14:19.793 00:14:19.793 common: 00:14:19.793 00:14:19.793 bus: 00:14:19.793 pci, vdev, 00:14:19.793 mempool: 00:14:19.793 ring, 00:14:19.793 dma: 00:14:19.793 00:14:19.793 net: 00:14:19.793 00:14:19.793 crypto: 00:14:19.793 00:14:19.793 compress: 00:14:19.793 00:14:19.793 vdpa: 00:14:19.793 00:14:19.793 00:14:19.793 Message: 00:14:19.793 ================= 00:14:19.793 Content Skipped 00:14:19.793 ================= 00:14:19.793 00:14:19.793 apps: 00:14:19.793 dumpcap: explicitly disabled via build config 00:14:19.793 graph: explicitly disabled via build config 00:14:19.793 pdump: explicitly disabled via build config 00:14:19.793 proc-info: explicitly disabled via build config 00:14:19.793 test-acl: explicitly disabled via build config 00:14:19.793 test-bbdev: explicitly disabled via build config 00:14:19.793 test-cmdline: explicitly disabled via build config 00:14:19.793 test-compress-perf: explicitly disabled via build config 00:14:19.793 test-crypto-perf: explicitly disabled via build config 00:14:19.793 
test-dma-perf: explicitly disabled via build config 00:14:19.793 test-eventdev: explicitly disabled via build config 00:14:19.793 test-fib: explicitly disabled via build config 00:14:19.793 test-flow-perf: explicitly disabled via build config 00:14:19.793 test-gpudev: explicitly disabled via build config 00:14:19.793 test-mldev: explicitly disabled via build config 00:14:19.793 test-pipeline: explicitly disabled via build config 00:14:19.793 test-pmd: explicitly disabled via build config 00:14:19.793 test-regex: explicitly disabled via build config 00:14:19.793 test-sad: explicitly disabled via build config 00:14:19.793 test-security-perf: explicitly disabled via build config 00:14:19.793 00:14:19.793 libs: 00:14:19.793 argparse: explicitly disabled via build config 00:14:19.793 metrics: explicitly disabled via build config 00:14:19.793 acl: explicitly disabled via build config 00:14:19.793 bbdev: explicitly disabled via build config 00:14:19.793 bitratestats: explicitly disabled via build config 00:14:19.793 bpf: explicitly disabled via build config 00:14:19.793 cfgfile: explicitly disabled via build config 00:14:19.793 distributor: explicitly disabled via build config 00:14:19.793 efd: explicitly disabled via build config 00:14:19.794 eventdev: explicitly disabled via build config 00:14:19.794 dispatcher: explicitly disabled via build config 00:14:19.794 gpudev: explicitly disabled via build config 00:14:19.794 gro: explicitly disabled via build config 00:14:19.794 gso: explicitly disabled via build config 00:14:19.794 ip_frag: explicitly disabled via build config 00:14:19.794 jobstats: explicitly disabled via build config 00:14:19.794 latencystats: explicitly disabled via build config 00:14:19.794 lpm: explicitly disabled via build config 00:14:19.794 member: explicitly disabled via build config 00:14:19.794 pcapng: explicitly disabled via build config 00:14:19.794 rawdev: explicitly disabled via build config 00:14:19.794 regexdev: explicitly disabled via build 
config 00:14:19.794 mldev: explicitly disabled via build config 00:14:19.794 rib: explicitly disabled via build config 00:14:19.794 sched: explicitly disabled via build config 00:14:19.794 stack: explicitly disabled via build config 00:14:19.794 ipsec: explicitly disabled via build config 00:14:19.794 pdcp: explicitly disabled via build config 00:14:19.794 fib: explicitly disabled via build config 00:14:19.794 port: explicitly disabled via build config 00:14:19.794 pdump: explicitly disabled via build config 00:14:19.794 table: explicitly disabled via build config 00:14:19.794 pipeline: explicitly disabled via build config 00:14:19.794 graph: explicitly disabled via build config 00:14:19.794 node: explicitly disabled via build config 00:14:19.794 00:14:19.794 drivers: 00:14:19.794 common/cpt: not in enabled drivers build config 00:14:19.794 common/dpaax: not in enabled drivers build config 00:14:19.794 common/iavf: not in enabled drivers build config 00:14:19.794 common/idpf: not in enabled drivers build config 00:14:19.794 common/ionic: not in enabled drivers build config 00:14:19.794 common/mvep: not in enabled drivers build config 00:14:19.794 common/octeontx: not in enabled drivers build config 00:14:19.794 bus/auxiliary: not in enabled drivers build config 00:14:19.794 bus/cdx: not in enabled drivers build config 00:14:19.794 bus/dpaa: not in enabled drivers build config 00:14:19.794 bus/fslmc: not in enabled drivers build config 00:14:19.794 bus/ifpga: not in enabled drivers build config 00:14:19.794 bus/platform: not in enabled drivers build config 00:14:19.794 bus/uacce: not in enabled drivers build config 00:14:19.794 bus/vmbus: not in enabled drivers build config 00:14:19.794 common/cnxk: not in enabled drivers build config 00:14:19.794 common/mlx5: not in enabled drivers build config 00:14:19.794 common/nfp: not in enabled drivers build config 00:14:19.794 common/nitrox: not in enabled drivers build config 00:14:19.794 common/qat: not in enabled drivers 
build config 00:14:19.794 common/sfc_efx: not in enabled drivers build config 00:14:19.794 mempool/bucket: not in enabled drivers build config 00:14:19.794 mempool/cnxk: not in enabled drivers build config 00:14:19.794 mempool/dpaa: not in enabled drivers build config 00:14:19.794 mempool/dpaa2: not in enabled drivers build config 00:14:19.794 mempool/octeontx: not in enabled drivers build config 00:14:19.794 mempool/stack: not in enabled drivers build config 00:14:19.794 dma/cnxk: not in enabled drivers build config 00:14:19.794 dma/dpaa: not in enabled drivers build config 00:14:19.794 dma/dpaa2: not in enabled drivers build config 00:14:19.794 dma/hisilicon: not in enabled drivers build config 00:14:19.794 dma/idxd: not in enabled drivers build config 00:14:19.794 dma/ioat: not in enabled drivers build config 00:14:19.794 dma/skeleton: not in enabled drivers build config 00:14:19.794 net/af_packet: not in enabled drivers build config 00:14:19.794 net/af_xdp: not in enabled drivers build config 00:14:19.794 net/ark: not in enabled drivers build config 00:14:19.794 net/atlantic: not in enabled drivers build config 00:14:19.794 net/avp: not in enabled drivers build config 00:14:19.794 net/axgbe: not in enabled drivers build config 00:14:19.794 net/bnx2x: not in enabled drivers build config 00:14:19.794 net/bnxt: not in enabled drivers build config 00:14:19.794 net/bonding: not in enabled drivers build config 00:14:19.794 net/cnxk: not in enabled drivers build config 00:14:19.794 net/cpfl: not in enabled drivers build config 00:14:19.794 net/cxgbe: not in enabled drivers build config 00:14:19.794 net/dpaa: not in enabled drivers build config 00:14:19.794 net/dpaa2: not in enabled drivers build config 00:14:19.794 net/e1000: not in enabled drivers build config 00:14:19.794 net/ena: not in enabled drivers build config 00:14:19.794 net/enetc: not in enabled drivers build config 00:14:19.794 net/enetfec: not in enabled drivers build config 00:14:19.794 net/enic: not in 
enabled drivers build config 00:14:19.794 net/failsafe: not in enabled drivers build config 00:14:19.794 net/fm10k: not in enabled drivers build config 00:14:19.794 net/gve: not in enabled drivers build config 00:14:19.794 net/hinic: not in enabled drivers build config 00:14:19.794 net/hns3: not in enabled drivers build config 00:14:19.794 net/i40e: not in enabled drivers build config 00:14:19.794 net/iavf: not in enabled drivers build config 00:14:19.794 net/ice: not in enabled drivers build config 00:14:19.794 net/idpf: not in enabled drivers build config 00:14:19.794 net/igc: not in enabled drivers build config 00:14:19.794 net/ionic: not in enabled drivers build config 00:14:19.794 net/ipn3ke: not in enabled drivers build config 00:14:19.794 net/ixgbe: not in enabled drivers build config 00:14:19.794 net/mana: not in enabled drivers build config 00:14:19.794 net/memif: not in enabled drivers build config 00:14:19.794 net/mlx4: not in enabled drivers build config 00:14:19.794 net/mlx5: not in enabled drivers build config 00:14:19.794 net/mvneta: not in enabled drivers build config 00:14:19.794 net/mvpp2: not in enabled drivers build config 00:14:19.794 net/netvsc: not in enabled drivers build config 00:14:19.794 net/nfb: not in enabled drivers build config 00:14:19.794 net/nfp: not in enabled drivers build config 00:14:19.794 net/ngbe: not in enabled drivers build config 00:14:19.794 net/null: not in enabled drivers build config 00:14:19.794 net/octeontx: not in enabled drivers build config 00:14:19.794 net/octeon_ep: not in enabled drivers build config 00:14:19.794 net/pcap: not in enabled drivers build config 00:14:19.794 net/pfe: not in enabled drivers build config 00:14:19.794 net/qede: not in enabled drivers build config 00:14:19.794 net/ring: not in enabled drivers build config 00:14:19.794 net/sfc: not in enabled drivers build config 00:14:19.794 net/softnic: not in enabled drivers build config 00:14:19.794 net/tap: not in enabled drivers build config 
00:14:19.794 net/thunderx: not in enabled drivers build config 00:14:19.794 net/txgbe: not in enabled drivers build config 00:14:19.794 net/vdev_netvsc: not in enabled drivers build config 00:14:19.794 net/vhost: not in enabled drivers build config 00:14:19.794 net/virtio: not in enabled drivers build config 00:14:19.794 net/vmxnet3: not in enabled drivers build config 00:14:19.794 raw/*: missing internal dependency, "rawdev" 00:14:19.794 crypto/armv8: not in enabled drivers build config 00:14:19.794 crypto/bcmfs: not in enabled drivers build config 00:14:19.794 crypto/caam_jr: not in enabled drivers build config 00:14:19.794 crypto/ccp: not in enabled drivers build config 00:14:19.794 crypto/cnxk: not in enabled drivers build config 00:14:19.794 crypto/dpaa_sec: not in enabled drivers build config 00:14:19.794 crypto/dpaa2_sec: not in enabled drivers build config 00:14:19.794 crypto/ipsec_mb: not in enabled drivers build config 00:14:19.794 crypto/mlx5: not in enabled drivers build config 00:14:19.794 crypto/mvsam: not in enabled drivers build config 00:14:19.794 crypto/nitrox: not in enabled drivers build config 00:14:19.794 crypto/null: not in enabled drivers build config 00:14:19.794 crypto/octeontx: not in enabled drivers build config 00:14:19.794 crypto/openssl: not in enabled drivers build config 00:14:19.794 crypto/scheduler: not in enabled drivers build config 00:14:19.794 crypto/uadk: not in enabled drivers build config 00:14:19.794 crypto/virtio: not in enabled drivers build config 00:14:19.794 compress/isal: not in enabled drivers build config 00:14:19.794 compress/mlx5: not in enabled drivers build config 00:14:19.794 compress/nitrox: not in enabled drivers build config 00:14:19.794 compress/octeontx: not in enabled drivers build config 00:14:19.794 compress/zlib: not in enabled drivers build config 00:14:19.794 regex/*: missing internal dependency, "regexdev" 00:14:19.794 ml/*: missing internal dependency, "mldev" 00:14:19.794 vdpa/ifc: not in enabled 
drivers build config 00:14:19.794 vdpa/mlx5: not in enabled drivers build config 00:14:19.794 vdpa/nfp: not in enabled drivers build config 00:14:19.794 vdpa/sfc: not in enabled drivers build config 00:14:19.794 event/*: missing internal dependency, "eventdev" 00:14:19.794 baseband/*: missing internal dependency, "bbdev" 00:14:19.794 gpu/*: missing internal dependency, "gpudev" 00:14:19.794 00:14:19.794 00:14:19.794 Build targets in project: 84 00:14:19.794 00:14:19.794 DPDK 24.03.0 00:14:19.794 00:14:19.794 User defined options 00:14:19.794 buildtype : debug 00:14:19.794 default_library : shared 00:14:19.794 libdir : lib 00:14:19.794 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:14:19.794 b_sanitize : address 00:14:19.794 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:14:19.794 c_link_args : 00:14:19.794 cpu_instruction_set: native 00:14:19.794 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:14:19.794 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:14:19.794 enable_docs : false 00:14:19.794 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:14:19.794 enable_kmods : false 00:14:19.794 max_lcores : 128 00:14:19.795 tests : false 00:14:19.795 00:14:19.795 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:14:19.795 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:14:19.795 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:14:19.795 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:14:19.795 [3/267] Linking 
static target lib/librte_kvargs.a 00:14:19.795 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:14:19.795 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:14:19.795 [6/267] Linking static target lib/librte_log.a 00:14:20.052 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:14:20.311 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:14:20.311 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:14:20.311 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:14:20.311 [11/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:14:20.311 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:14:20.311 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:14:20.311 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:14:20.568 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:14:20.568 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:14:20.568 [17/267] Linking static target lib/librte_telemetry.a 00:14:20.568 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:14:20.826 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:14:20.826 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:14:20.826 [21/267] Linking target lib/librte_log.so.24.1 00:14:20.826 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:14:20.826 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:14:21.084 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:14:21.084 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 
00:14:21.084 [26/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:14:21.084 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:14:21.084 [28/267] Linking target lib/librte_kvargs.so.24.1 00:14:21.084 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:14:21.342 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:14:21.342 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:14:21.342 [32/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:14:21.342 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:14:21.342 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:14:21.342 [35/267] Linking target lib/librte_telemetry.so.24.1 00:14:21.342 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:14:21.342 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:14:21.342 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:14:21.622 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:14:21.622 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:14:21.622 [41/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:14:21.622 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:14:21.622 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:14:21.622 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:14:21.622 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:14:21.879 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:14:21.879 [47/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:14:21.879 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:14:21.879 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:14:21.879 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:14:22.137 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:14:22.137 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:14:22.137 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:14:22.137 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:14:22.137 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:14:22.137 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:14:22.395 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:14:22.395 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:14:22.395 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:14:22.395 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:14:22.395 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:14:22.395 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:14:22.395 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:14:22.395 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:14:22.652 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:14:22.652 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:14:22.652 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:14:22.652 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:14:22.915 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:14:22.915 [70/267] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:14:22.915 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:14:22.915 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:14:22.915 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:14:22.915 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:14:22.915 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:14:22.915 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:14:23.174 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:14:23.174 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:14:23.174 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:14:23.174 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:14:23.174 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:14:23.431 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:14:23.431 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:14:23.431 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:14:23.431 [85/267] Linking static target lib/librte_eal.a 00:14:23.689 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:14:23.689 [87/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:14:23.689 [88/267] Linking static target lib/librte_ring.a 00:14:23.689 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:14:23.689 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:14:23.689 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:14:23.689 [92/267] Linking static target lib/librte_mempool.a 00:14:23.689 [93/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 
00:14:23.946 [94/267] Linking static target lib/librte_rcu.a 00:14:23.946 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:14:23.946 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:14:24.204 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:14:24.204 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:14:24.204 [99/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:14:24.461 [100/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:14:24.461 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:14:24.461 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:14:24.461 [103/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:14:24.461 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:14:24.719 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:14:24.719 [106/267] Linking static target lib/librte_net.a 00:14:24.719 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:14:24.976 [108/267] Linking static target lib/librte_meter.a 00:14:24.976 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:14:24.976 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:14:24.976 [111/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:14:24.976 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:14:24.976 [113/267] Linking static target lib/librte_mbuf.a 00:14:24.976 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:14:24.976 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:14:25.235 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:14:25.235 [117/267] Generating lib/meter.sym_chk 
with a custom command (wrapped by meson to capture output) 00:14:25.493 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:14:25.493 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:14:25.493 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:14:25.776 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:14:26.035 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:14:26.035 [123/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:14:26.035 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:14:26.035 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:14:26.035 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:14:26.035 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:14:26.035 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:14:26.035 [129/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:14:26.035 [130/267] Linking static target lib/librte_pci.a 00:14:26.035 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:14:26.035 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:14:26.293 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:14:26.293 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:14:26.293 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:14:26.293 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:14:26.293 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:14:26.293 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:14:26.293 [139/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:14:26.293 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:14:26.293 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:14:26.293 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:14:26.293 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:14:26.551 [144/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:14:26.551 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:14:26.551 [146/267] Linking static target lib/librte_cmdline.a 00:14:26.551 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:14:26.551 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:14:27.117 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:14:27.117 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:14:27.117 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:14:27.117 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:14:27.117 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:14:27.117 [154/267] Linking static target lib/librte_timer.a 00:14:27.117 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:14:27.117 [156/267] Linking static target lib/librte_compressdev.a 00:14:27.375 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:14:27.375 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:14:27.375 [159/267] Linking static target lib/librte_ethdev.a 00:14:27.375 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:14:27.375 [161/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:14:27.632 
[162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:14:27.632 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:14:27.632 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:14:27.632 [165/267] Linking static target lib/librte_hash.a 00:14:27.632 [166/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:14:27.632 [167/267] Linking static target lib/librte_dmadev.a 00:14:27.632 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:14:27.890 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:14:27.890 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:14:27.890 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:14:27.890 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:14:28.147 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:28.147 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:14:28.147 [175/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:14:28.147 [176/267] Linking static target lib/librte_cryptodev.a 00:14:28.424 [177/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:14:28.424 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:14:28.424 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:14:28.424 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:28.424 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:14:28.424 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:14:28.424 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 
00:14:28.424 [184/267] Linking static target lib/librte_power.a 00:14:28.688 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:14:28.946 [186/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:14:28.946 [187/267] Linking static target lib/librte_security.a 00:14:28.946 [188/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:14:28.946 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:14:28.946 [190/267] Linking static target lib/librte_reorder.a 00:14:29.204 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:14:29.205 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:14:29.205 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:14:29.462 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:14:29.462 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:14:29.462 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:14:29.720 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:14:29.720 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:14:29.720 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:14:29.978 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:14:29.978 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:14:29.978 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:14:29.978 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:14:30.236 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:14:30.236 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:14:30.236 
[206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:14:30.236 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:14:30.236 [208/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:30.494 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:14:30.494 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:14:30.494 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:14:30.494 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:14:30.494 [213/267] Linking static target drivers/librte_bus_vdev.a 00:14:30.751 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:14:30.751 [215/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:14:30.751 [216/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:14:30.751 [217/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:14:30.751 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:14:30.751 [219/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:14:30.752 [220/267] Linking static target drivers/librte_bus_pci.a 00:14:31.009 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:14:31.009 [222/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:31.009 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:14:31.009 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:14:31.009 [225/267] Linking static target drivers/librte_mempool_ring.a 00:14:31.267 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:14:31.524 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:14:32.457 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:14:32.457 [229/267] Linking target lib/librte_eal.so.24.1 00:14:32.457 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:14:32.457 [231/267] Linking target lib/librte_timer.so.24.1 00:14:32.457 [232/267] Linking target lib/librte_meter.so.24.1 00:14:32.457 [233/267] Linking target lib/librte_pci.so.24.1 00:14:32.457 [234/267] Linking target lib/librte_ring.so.24.1 00:14:32.457 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:14:32.457 [236/267] Linking target lib/librte_dmadev.so.24.1 00:14:32.715 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:14:32.715 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:14:32.715 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:14:32.715 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:14:32.715 [241/267] Linking target lib/librte_rcu.so.24.1 00:14:32.715 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:14:32.715 [243/267] Linking target lib/librte_mempool.so.24.1 00:14:32.715 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:14:32.715 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:14:32.715 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:14:32.715 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:14:32.973 [248/267] Linking target lib/librte_mbuf.so.24.1 00:14:32.973 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:14:32.973 [250/267] Linking target lib/librte_reorder.so.24.1 00:14:32.973 
[251/267] Linking target lib/librte_net.so.24.1 00:14:32.973 [252/267] Linking target lib/librte_compressdev.so.24.1 00:14:32.973 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:14:33.231 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:14:33.231 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:14:33.231 [256/267] Linking target lib/librte_cmdline.so.24.1 00:14:33.231 [257/267] Linking target lib/librte_security.so.24.1 00:14:33.231 [258/267] Linking target lib/librte_hash.so.24.1 00:14:33.231 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:14:33.796 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:33.796 [261/267] Linking target lib/librte_ethdev.so.24.1 00:14:33.796 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:14:34.053 [263/267] Linking target lib/librte_power.so.24.1 00:14:35.992 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:14:35.992 [265/267] Linking static target lib/librte_vhost.a 00:14:37.364 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:14:37.364 [267/267] Linking target lib/librte_vhost.so.24.1 00:14:37.364 INFO: autodetecting backend as ninja 00:14:37.364 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:14:59.288 CC lib/log/log.o 00:14:59.288 CC lib/log/log_flags.o 00:14:59.288 CC lib/log/log_deprecated.o 00:14:59.288 CC lib/ut_mock/mock.o 00:14:59.288 CC lib/ut/ut.o 00:14:59.288 LIB libspdk_log.a 00:14:59.288 LIB libspdk_ut_mock.a 00:14:59.288 LIB libspdk_ut.a 00:14:59.288 SO libspdk_log.so.7.1 00:14:59.288 SO libspdk_ut_mock.so.6.0 00:14:59.288 SO libspdk_ut.so.2.0 00:14:59.288 SYMLINK libspdk_log.so 00:14:59.288 SYMLINK libspdk_ut_mock.so 00:14:59.288 SYMLINK 
libspdk_ut.so 00:14:59.288 CC lib/dma/dma.o 00:14:59.288 CC lib/util/base64.o 00:14:59.288 CC lib/util/bit_array.o 00:14:59.288 CC lib/util/crc16.o 00:14:59.288 CC lib/util/cpuset.o 00:14:59.288 CC lib/util/crc32.o 00:14:59.288 CXX lib/trace_parser/trace.o 00:14:59.288 CC lib/util/crc32c.o 00:14:59.288 CC lib/ioat/ioat.o 00:14:59.288 CC lib/vfio_user/host/vfio_user_pci.o 00:14:59.288 CC lib/util/crc32_ieee.o 00:14:59.288 CC lib/util/crc64.o 00:14:59.288 CC lib/util/dif.o 00:14:59.288 CC lib/util/fd.o 00:14:59.288 LIB libspdk_dma.a 00:14:59.288 CC lib/util/fd_group.o 00:14:59.288 SO libspdk_dma.so.5.0 00:14:59.288 CC lib/vfio_user/host/vfio_user.o 00:14:59.288 CC lib/util/file.o 00:14:59.288 CC lib/util/hexlify.o 00:14:59.288 SYMLINK libspdk_dma.so 00:14:59.288 CC lib/util/iov.o 00:14:59.288 LIB libspdk_ioat.a 00:14:59.288 CC lib/util/math.o 00:14:59.288 SO libspdk_ioat.so.7.0 00:14:59.288 SYMLINK libspdk_ioat.so 00:14:59.288 CC lib/util/net.o 00:14:59.288 CC lib/util/pipe.o 00:14:59.288 CC lib/util/strerror_tls.o 00:14:59.288 CC lib/util/string.o 00:14:59.288 CC lib/util/uuid.o 00:14:59.288 CC lib/util/xor.o 00:14:59.288 LIB libspdk_vfio_user.a 00:14:59.288 CC lib/util/zipf.o 00:14:59.288 SO libspdk_vfio_user.so.5.0 00:14:59.288 CC lib/util/md5.o 00:14:59.288 SYMLINK libspdk_vfio_user.so 00:14:59.288 LIB libspdk_util.a 00:14:59.288 SO libspdk_util.so.10.1 00:14:59.547 SYMLINK libspdk_util.so 00:14:59.547 LIB libspdk_trace_parser.a 00:14:59.547 SO libspdk_trace_parser.so.6.0 00:14:59.547 SYMLINK libspdk_trace_parser.so 00:14:59.547 CC lib/conf/conf.o 00:14:59.547 CC lib/env_dpdk/env.o 00:14:59.547 CC lib/idxd/idxd.o 00:14:59.547 CC lib/json/json_util.o 00:14:59.547 CC lib/json/json_parse.o 00:14:59.547 CC lib/env_dpdk/memory.o 00:14:59.547 CC lib/env_dpdk/pci.o 00:14:59.547 CC lib/idxd/idxd_user.o 00:14:59.547 CC lib/rdma_utils/rdma_utils.o 00:14:59.547 CC lib/vmd/vmd.o 00:14:59.821 LIB libspdk_conf.a 00:14:59.821 SO libspdk_conf.so.6.0 00:14:59.821 CC 
lib/idxd/idxd_kernel.o 00:14:59.821 LIB libspdk_rdma_utils.a 00:14:59.821 CC lib/json/json_write.o 00:14:59.821 SYMLINK libspdk_conf.so 00:14:59.821 CC lib/vmd/led.o 00:14:59.821 SO libspdk_rdma_utils.so.1.0 00:14:59.821 CC lib/env_dpdk/init.o 00:15:00.079 SYMLINK libspdk_rdma_utils.so 00:15:00.080 CC lib/env_dpdk/threads.o 00:15:00.080 CC lib/env_dpdk/pci_ioat.o 00:15:00.080 CC lib/env_dpdk/pci_virtio.o 00:15:00.080 CC lib/env_dpdk/pci_vmd.o 00:15:00.080 CC lib/env_dpdk/pci_idxd.o 00:15:00.080 LIB libspdk_json.a 00:15:00.080 SO libspdk_json.so.6.0 00:15:00.080 CC lib/env_dpdk/pci_event.o 00:15:00.080 CC lib/env_dpdk/sigbus_handler.o 00:15:00.080 SYMLINK libspdk_json.so 00:15:00.080 CC lib/env_dpdk/pci_dpdk.o 00:15:00.338 CC lib/rdma_provider/common.o 00:15:00.338 CC lib/env_dpdk/pci_dpdk_2207.o 00:15:00.338 CC lib/env_dpdk/pci_dpdk_2211.o 00:15:00.338 CC lib/rdma_provider/rdma_provider_verbs.o 00:15:00.338 LIB libspdk_idxd.a 00:15:00.338 LIB libspdk_vmd.a 00:15:00.338 SO libspdk_idxd.so.12.1 00:15:00.338 SO libspdk_vmd.so.6.0 00:15:00.338 SYMLINK libspdk_idxd.so 00:15:00.338 SYMLINK libspdk_vmd.so 00:15:00.596 LIB libspdk_rdma_provider.a 00:15:00.596 CC lib/jsonrpc/jsonrpc_server.o 00:15:00.596 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:15:00.596 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:15:00.596 CC lib/jsonrpc/jsonrpc_client.o 00:15:00.596 SO libspdk_rdma_provider.so.7.0 00:15:00.596 SYMLINK libspdk_rdma_provider.so 00:15:00.853 LIB libspdk_jsonrpc.a 00:15:00.853 SO libspdk_jsonrpc.so.6.0 00:15:00.853 SYMLINK libspdk_jsonrpc.so 00:15:01.111 CC lib/rpc/rpc.o 00:15:01.111 LIB libspdk_env_dpdk.a 00:15:01.369 SO libspdk_env_dpdk.so.15.1 00:15:01.369 LIB libspdk_rpc.a 00:15:01.369 SO libspdk_rpc.so.6.0 00:15:01.369 SYMLINK libspdk_env_dpdk.so 00:15:01.369 SYMLINK libspdk_rpc.so 00:15:01.626 CC lib/trace/trace.o 00:15:01.626 CC lib/trace/trace_flags.o 00:15:01.626 CC lib/trace/trace_rpc.o 00:15:01.626 CC lib/notify/notify.o 00:15:01.626 CC lib/notify/notify_rpc.o 
00:15:01.626 CC lib/keyring/keyring_rpc.o 00:15:01.626 CC lib/keyring/keyring.o 00:15:01.884 LIB libspdk_notify.a 00:15:01.884 SO libspdk_notify.so.6.0 00:15:01.884 SYMLINK libspdk_notify.so 00:15:01.884 LIB libspdk_trace.a 00:15:01.884 LIB libspdk_keyring.a 00:15:01.884 SO libspdk_keyring.so.2.0 00:15:01.884 SO libspdk_trace.so.11.0 00:15:01.884 SYMLINK libspdk_keyring.so 00:15:01.884 SYMLINK libspdk_trace.so 00:15:02.143 CC lib/sock/sock_rpc.o 00:15:02.143 CC lib/sock/sock.o 00:15:02.143 CC lib/thread/thread.o 00:15:02.143 CC lib/thread/iobuf.o 00:15:02.707 LIB libspdk_sock.a 00:15:02.707 SO libspdk_sock.so.10.0 00:15:02.707 SYMLINK libspdk_sock.so 00:15:02.964 CC lib/nvme/nvme_ctrlr.o 00:15:02.965 CC lib/nvme/nvme_ctrlr_cmd.o 00:15:02.965 CC lib/nvme/nvme_ns.o 00:15:02.965 CC lib/nvme/nvme_fabric.o 00:15:02.965 CC lib/nvme/nvme_pcie_common.o 00:15:02.965 CC lib/nvme/nvme_ns_cmd.o 00:15:02.965 CC lib/nvme/nvme_pcie.o 00:15:02.965 CC lib/nvme/nvme_qpair.o 00:15:02.965 CC lib/nvme/nvme.o 00:15:03.529 LIB libspdk_thread.a 00:15:03.529 SO libspdk_thread.so.11.0 00:15:03.529 SYMLINK libspdk_thread.so 00:15:03.529 CC lib/nvme/nvme_quirks.o 00:15:03.529 CC lib/nvme/nvme_transport.o 00:15:03.529 CC lib/nvme/nvme_discovery.o 00:15:03.529 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:15:03.529 CC lib/accel/accel.o 00:15:03.786 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:15:03.787 CC lib/nvme/nvme_tcp.o 00:15:03.787 CC lib/nvme/nvme_opal.o 00:15:04.044 CC lib/nvme/nvme_io_msg.o 00:15:04.044 CC lib/nvme/nvme_poll_group.o 00:15:04.044 CC lib/nvme/nvme_zns.o 00:15:04.044 CC lib/accel/accel_rpc.o 00:15:04.302 CC lib/nvme/nvme_stubs.o 00:15:04.302 CC lib/accel/accel_sw.o 00:15:04.302 CC lib/nvme/nvme_auth.o 00:15:04.560 CC lib/nvme/nvme_cuse.o 00:15:04.560 CC lib/nvme/nvme_rdma.o 00:15:04.819 CC lib/blob/blobstore.o 00:15:04.819 CC lib/virtio/virtio.o 00:15:04.819 CC lib/init/json_config.o 00:15:04.819 LIB libspdk_accel.a 00:15:04.819 SO libspdk_accel.so.16.0 00:15:04.819 CC lib/fsdev/fsdev.o 
00:15:05.076 SYMLINK libspdk_accel.so 00:15:05.076 CC lib/init/subsystem.o 00:15:05.076 CC lib/virtio/virtio_vhost_user.o 00:15:05.076 CC lib/virtio/virtio_vfio_user.o 00:15:05.076 CC lib/init/subsystem_rpc.o 00:15:05.076 CC lib/virtio/virtio_pci.o 00:15:05.077 CC lib/init/rpc.o 00:15:05.334 CC lib/blob/request.o 00:15:05.335 LIB libspdk_init.a 00:15:05.335 CC lib/blob/zeroes.o 00:15:05.335 CC lib/blob/blob_bs_dev.o 00:15:05.335 SO libspdk_init.so.6.0 00:15:05.335 CC lib/bdev/bdev.o 00:15:05.335 CC lib/bdev/bdev_rpc.o 00:15:05.335 SYMLINK libspdk_init.so 00:15:05.335 CC lib/bdev/bdev_zone.o 00:15:05.592 LIB libspdk_virtio.a 00:15:05.592 SO libspdk_virtio.so.7.0 00:15:05.592 CC lib/fsdev/fsdev_io.o 00:15:05.592 SYMLINK libspdk_virtio.so 00:15:05.593 CC lib/bdev/part.o 00:15:05.593 CC lib/fsdev/fsdev_rpc.o 00:15:05.593 CC lib/bdev/scsi_nvme.o 00:15:05.593 CC lib/event/app.o 00:15:05.593 CC lib/event/reactor.o 00:15:05.850 CC lib/event/log_rpc.o 00:15:05.850 CC lib/event/app_rpc.o 00:15:05.850 CC lib/event/scheduler_static.o 00:15:05.850 LIB libspdk_fsdev.a 00:15:06.107 SO libspdk_fsdev.so.2.0 00:15:06.107 SYMLINK libspdk_fsdev.so 00:15:06.107 LIB libspdk_nvme.a 00:15:06.107 LIB libspdk_event.a 00:15:06.107 SO libspdk_event.so.14.0 00:15:06.381 SO libspdk_nvme.so.15.0 00:15:06.381 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:15:06.381 SYMLINK libspdk_event.so 00:15:06.381 SYMLINK libspdk_nvme.so 00:15:06.967 LIB libspdk_fuse_dispatcher.a 00:15:06.967 SO libspdk_fuse_dispatcher.so.1.0 00:15:06.967 SYMLINK libspdk_fuse_dispatcher.so 00:15:08.370 LIB libspdk_bdev.a 00:15:08.370 SO libspdk_bdev.so.17.0 00:15:08.370 LIB libspdk_blob.a 00:15:08.370 SYMLINK libspdk_bdev.so 00:15:08.370 SO libspdk_blob.so.11.0 00:15:08.628 SYMLINK libspdk_blob.so 00:15:08.628 CC lib/ublk/ublk.o 00:15:08.628 CC lib/nvmf/ctrlr.o 00:15:08.628 CC lib/ublk/ublk_rpc.o 00:15:08.628 CC lib/nbd/nbd.o 00:15:08.628 CC lib/nvmf/ctrlr_discovery.o 00:15:08.628 CC lib/nbd/nbd_rpc.o 00:15:08.628 CC 
lib/scsi/dev.o 00:15:08.628 CC lib/ftl/ftl_core.o 00:15:08.628 CC lib/lvol/lvol.o 00:15:08.628 CC lib/blobfs/blobfs.o 00:15:08.628 CC lib/blobfs/tree.o 00:15:08.628 CC lib/nvmf/ctrlr_bdev.o 00:15:08.885 CC lib/scsi/lun.o 00:15:08.885 CC lib/scsi/port.o 00:15:08.885 CC lib/ftl/ftl_init.o 00:15:08.885 CC lib/scsi/scsi.o 00:15:09.143 LIB libspdk_nbd.a 00:15:09.143 SO libspdk_nbd.so.7.0 00:15:09.143 CC lib/scsi/scsi_bdev.o 00:15:09.143 CC lib/ftl/ftl_layout.o 00:15:09.143 SYMLINK libspdk_nbd.so 00:15:09.143 CC lib/ftl/ftl_debug.o 00:15:09.143 CC lib/ftl/ftl_io.o 00:15:09.143 CC lib/scsi/scsi_pr.o 00:15:09.401 LIB libspdk_ublk.a 00:15:09.401 SO libspdk_ublk.so.3.0 00:15:09.401 LIB libspdk_blobfs.a 00:15:09.401 CC lib/ftl/ftl_sb.o 00:15:09.401 CC lib/ftl/ftl_l2p.o 00:15:09.401 SYMLINK libspdk_ublk.so 00:15:09.401 SO libspdk_blobfs.so.10.0 00:15:09.401 CC lib/ftl/ftl_l2p_flat.o 00:15:09.401 CC lib/nvmf/subsystem.o 00:15:09.401 CC lib/nvmf/nvmf.o 00:15:09.401 LIB libspdk_lvol.a 00:15:09.401 SYMLINK libspdk_blobfs.so 00:15:09.401 CC lib/nvmf/nvmf_rpc.o 00:15:09.401 CC lib/nvmf/transport.o 00:15:09.401 SO libspdk_lvol.so.10.0 00:15:09.659 CC lib/ftl/ftl_nv_cache.o 00:15:09.659 SYMLINK libspdk_lvol.so 00:15:09.659 CC lib/scsi/scsi_rpc.o 00:15:09.659 CC lib/ftl/ftl_band.o 00:15:09.659 CC lib/scsi/task.o 00:15:09.659 CC lib/ftl/ftl_band_ops.o 00:15:09.659 CC lib/ftl/ftl_writer.o 00:15:09.916 LIB libspdk_scsi.a 00:15:09.916 SO libspdk_scsi.so.9.0 00:15:09.916 CC lib/ftl/ftl_rq.o 00:15:09.916 SYMLINK libspdk_scsi.so 00:15:09.916 CC lib/ftl/ftl_reloc.o 00:15:09.916 CC lib/ftl/ftl_l2p_cache.o 00:15:09.916 CC lib/ftl/ftl_p2l.o 00:15:09.916 CC lib/ftl/ftl_p2l_log.o 00:15:10.174 CC lib/nvmf/tcp.o 00:15:10.432 CC lib/ftl/mngt/ftl_mngt.o 00:15:10.432 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:15:10.432 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:15:10.432 CC lib/ftl/mngt/ftl_mngt_startup.o 00:15:10.432 CC lib/ftl/mngt/ftl_mngt_md.o 00:15:10.432 CC lib/nvmf/stubs.o 00:15:10.432 CC 
lib/ftl/mngt/ftl_mngt_misc.o 00:15:10.432 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:15:10.690 CC lib/nvmf/mdns_server.o 00:15:10.690 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:15:10.690 CC lib/nvmf/rdma.o 00:15:10.690 CC lib/nvmf/auth.o 00:15:10.690 CC lib/ftl/mngt/ftl_mngt_band.o 00:15:10.690 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:15:10.690 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:15:10.948 CC lib/iscsi/conn.o 00:15:10.948 CC lib/iscsi/init_grp.o 00:15:10.948 CC lib/vhost/vhost.o 00:15:10.948 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:15:10.948 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:15:10.948 CC lib/ftl/utils/ftl_conf.o 00:15:10.948 CC lib/ftl/utils/ftl_md.o 00:15:11.206 CC lib/iscsi/iscsi.o 00:15:11.206 CC lib/iscsi/param.o 00:15:11.206 CC lib/iscsi/portal_grp.o 00:15:11.465 CC lib/iscsi/tgt_node.o 00:15:11.465 CC lib/iscsi/iscsi_subsystem.o 00:15:11.465 CC lib/iscsi/iscsi_rpc.o 00:15:11.465 CC lib/iscsi/task.o 00:15:11.465 CC lib/ftl/utils/ftl_mempool.o 00:15:11.465 CC lib/ftl/utils/ftl_bitmap.o 00:15:11.722 CC lib/ftl/utils/ftl_property.o 00:15:11.722 CC lib/vhost/vhost_rpc.o 00:15:11.722 CC lib/vhost/vhost_scsi.o 00:15:11.722 CC lib/vhost/vhost_blk.o 00:15:11.722 CC lib/vhost/rte_vhost_user.o 00:15:11.980 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:15:11.980 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:15:11.980 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:15:11.980 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:15:12.239 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:15:12.239 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:15:12.239 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:15:12.239 CC lib/ftl/upgrade/ftl_sb_v3.o 00:15:12.239 CC lib/ftl/upgrade/ftl_sb_v5.o 00:15:12.239 CC lib/ftl/nvc/ftl_nvc_dev.o 00:15:12.239 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:15:12.239 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:15:12.497 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:15:12.497 LIB libspdk_nvmf.a 00:15:12.497 CC lib/ftl/base/ftl_base_dev.o 00:15:12.497 CC lib/ftl/base/ftl_base_bdev.o 00:15:12.497 CC lib/ftl/ftl_trace.o 00:15:12.497 
SO libspdk_nvmf.so.20.0 00:15:12.754 LIB libspdk_iscsi.a 00:15:12.754 SO libspdk_iscsi.so.8.0 00:15:12.754 SYMLINK libspdk_nvmf.so 00:15:12.754 LIB libspdk_ftl.a 00:15:12.754 LIB libspdk_vhost.a 00:15:12.754 SYMLINK libspdk_iscsi.so 00:15:13.012 SO libspdk_vhost.so.8.0 00:15:13.012 SO libspdk_ftl.so.9.0 00:15:13.012 SYMLINK libspdk_vhost.so 00:15:13.279 SYMLINK libspdk_ftl.so 00:15:13.536 CC module/env_dpdk/env_dpdk_rpc.o 00:15:13.536 CC module/keyring/file/keyring.o 00:15:13.536 CC module/scheduler/dynamic/scheduler_dynamic.o 00:15:13.536 CC module/accel/error/accel_error.o 00:15:13.536 CC module/fsdev/aio/fsdev_aio.o 00:15:13.536 CC module/sock/posix/posix.o 00:15:13.536 CC module/accel/ioat/accel_ioat.o 00:15:13.536 CC module/keyring/linux/keyring.o 00:15:13.536 CC module/accel/dsa/accel_dsa.o 00:15:13.536 CC module/blob/bdev/blob_bdev.o 00:15:13.536 LIB libspdk_env_dpdk_rpc.a 00:15:13.536 SO libspdk_env_dpdk_rpc.so.6.0 00:15:13.794 CC module/keyring/file/keyring_rpc.o 00:15:13.794 CC module/keyring/linux/keyring_rpc.o 00:15:13.794 SYMLINK libspdk_env_dpdk_rpc.so 00:15:13.794 CC module/accel/error/accel_error_rpc.o 00:15:13.794 LIB libspdk_scheduler_dynamic.a 00:15:13.794 CC module/accel/ioat/accel_ioat_rpc.o 00:15:13.794 SO libspdk_scheduler_dynamic.so.4.0 00:15:13.794 CC module/fsdev/aio/fsdev_aio_rpc.o 00:15:13.794 CC module/accel/dsa/accel_dsa_rpc.o 00:15:13.794 SYMLINK libspdk_scheduler_dynamic.so 00:15:13.794 LIB libspdk_keyring_file.a 00:15:13.794 LIB libspdk_keyring_linux.a 00:15:13.794 LIB libspdk_blob_bdev.a 00:15:13.794 LIB libspdk_accel_error.a 00:15:13.794 SO libspdk_keyring_file.so.2.0 00:15:13.794 SO libspdk_keyring_linux.so.1.0 00:15:13.794 LIB libspdk_accel_ioat.a 00:15:13.794 SO libspdk_blob_bdev.so.11.0 00:15:13.794 SO libspdk_accel_error.so.2.0 00:15:13.794 SO libspdk_accel_ioat.so.6.0 00:15:14.074 LIB libspdk_accel_dsa.a 00:15:14.074 SYMLINK libspdk_keyring_file.so 00:15:14.074 SYMLINK libspdk_keyring_linux.so 00:15:14.074 CC 
module/fsdev/aio/linux_aio_mgr.o 00:15:14.074 SYMLINK libspdk_blob_bdev.so 00:15:14.074 SYMLINK libspdk_accel_ioat.so 00:15:14.074 SO libspdk_accel_dsa.so.5.0 00:15:14.074 SYMLINK libspdk_accel_error.so 00:15:14.074 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:15:14.074 SYMLINK libspdk_accel_dsa.so 00:15:14.074 CC module/accel/iaa/accel_iaa.o 00:15:14.074 CC module/scheduler/gscheduler/gscheduler.o 00:15:14.074 CC module/accel/iaa/accel_iaa_rpc.o 00:15:14.074 LIB libspdk_scheduler_dpdk_governor.a 00:15:14.074 SO libspdk_scheduler_dpdk_governor.so.4.0 00:15:14.074 CC module/blobfs/bdev/blobfs_bdev.o 00:15:14.074 CC module/bdev/delay/vbdev_delay.o 00:15:14.074 CC module/bdev/gpt/gpt.o 00:15:14.332 CC module/bdev/error/vbdev_error.o 00:15:14.332 LIB libspdk_fsdev_aio.a 00:15:14.332 SYMLINK libspdk_scheduler_dpdk_governor.so 00:15:14.332 CC module/bdev/delay/vbdev_delay_rpc.o 00:15:14.332 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:15:14.332 SO libspdk_fsdev_aio.so.1.0 00:15:14.332 LIB libspdk_scheduler_gscheduler.a 00:15:14.332 LIB libspdk_sock_posix.a 00:15:14.332 SO libspdk_scheduler_gscheduler.so.4.0 00:15:14.332 LIB libspdk_accel_iaa.a 00:15:14.332 SO libspdk_sock_posix.so.6.0 00:15:14.332 SYMLINK libspdk_fsdev_aio.so 00:15:14.332 SO libspdk_accel_iaa.so.3.0 00:15:14.332 SYMLINK libspdk_scheduler_gscheduler.so 00:15:14.332 CC module/bdev/gpt/vbdev_gpt.o 00:15:14.332 CC module/bdev/error/vbdev_error_rpc.o 00:15:14.332 SYMLINK libspdk_sock_posix.so 00:15:14.332 SYMLINK libspdk_accel_iaa.so 00:15:14.332 LIB libspdk_blobfs_bdev.a 00:15:14.332 SO libspdk_blobfs_bdev.so.6.0 00:15:14.591 SYMLINK libspdk_blobfs_bdev.so 00:15:14.591 LIB libspdk_bdev_delay.a 00:15:14.591 SO libspdk_bdev_delay.so.6.0 00:15:14.591 LIB libspdk_bdev_error.a 00:15:14.591 CC module/bdev/lvol/vbdev_lvol.o 00:15:14.591 CC module/bdev/malloc/bdev_malloc.o 00:15:14.591 CC module/bdev/nvme/bdev_nvme.o 00:15:14.591 CC module/bdev/passthru/vbdev_passthru.o 00:15:14.591 SO 
libspdk_bdev_error.so.6.0 00:15:14.591 CC module/bdev/null/bdev_null.o 00:15:14.591 SYMLINK libspdk_bdev_delay.so 00:15:14.591 CC module/bdev/malloc/bdev_malloc_rpc.o 00:15:14.591 CC module/bdev/raid/bdev_raid.o 00:15:14.591 SYMLINK libspdk_bdev_error.so 00:15:14.591 CC module/bdev/raid/bdev_raid_rpc.o 00:15:14.591 LIB libspdk_bdev_gpt.a 00:15:14.591 CC module/bdev/split/vbdev_split.o 00:15:14.591 SO libspdk_bdev_gpt.so.6.0 00:15:14.591 SYMLINK libspdk_bdev_gpt.so 00:15:14.851 CC module/bdev/split/vbdev_split_rpc.o 00:15:14.851 CC module/bdev/null/bdev_null_rpc.o 00:15:14.851 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:15:14.851 CC module/bdev/raid/bdev_raid_sb.o 00:15:14.851 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:15:14.851 LIB libspdk_bdev_split.a 00:15:14.851 LIB libspdk_bdev_null.a 00:15:14.851 SO libspdk_bdev_split.so.6.0 00:15:14.851 LIB libspdk_bdev_malloc.a 00:15:14.851 SO libspdk_bdev_null.so.6.0 00:15:14.851 SO libspdk_bdev_malloc.so.6.0 00:15:14.851 SYMLINK libspdk_bdev_split.so 00:15:14.851 CC module/bdev/raid/raid0.o 00:15:15.109 LIB libspdk_bdev_passthru.a 00:15:15.109 SYMLINK libspdk_bdev_null.so 00:15:15.109 SYMLINK libspdk_bdev_malloc.so 00:15:15.109 CC module/bdev/raid/raid1.o 00:15:15.109 CC module/bdev/raid/concat.o 00:15:15.109 SO libspdk_bdev_passthru.so.6.0 00:15:15.109 CC module/bdev/zone_block/vbdev_zone_block.o 00:15:15.109 SYMLINK libspdk_bdev_passthru.so 00:15:15.109 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:15:15.109 CC module/bdev/raid/raid5f.o 00:15:15.109 LIB libspdk_bdev_lvol.a 00:15:15.109 SO libspdk_bdev_lvol.so.6.0 00:15:15.109 CC module/bdev/aio/bdev_aio.o 00:15:15.367 SYMLINK libspdk_bdev_lvol.so 00:15:15.367 CC module/bdev/aio/bdev_aio_rpc.o 00:15:15.367 CC module/bdev/nvme/bdev_nvme_rpc.o 00:15:15.367 CC module/bdev/ftl/bdev_ftl.o 00:15:15.367 LIB libspdk_bdev_zone_block.a 00:15:15.367 CC module/bdev/virtio/bdev_virtio_scsi.o 00:15:15.367 CC module/bdev/iscsi/bdev_iscsi.o 00:15:15.367 SO 
libspdk_bdev_zone_block.so.6.0 00:15:15.367 CC module/bdev/nvme/nvme_rpc.o 00:15:15.367 SYMLINK libspdk_bdev_zone_block.so 00:15:15.367 CC module/bdev/ftl/bdev_ftl_rpc.o 00:15:15.625 LIB libspdk_bdev_aio.a 00:15:15.625 SO libspdk_bdev_aio.so.6.0 00:15:15.625 CC module/bdev/virtio/bdev_virtio_blk.o 00:15:15.625 LIB libspdk_bdev_raid.a 00:15:15.625 SYMLINK libspdk_bdev_aio.so 00:15:15.625 CC module/bdev/virtio/bdev_virtio_rpc.o 00:15:15.625 CC module/bdev/nvme/bdev_mdns_client.o 00:15:15.625 CC module/bdev/nvme/vbdev_opal.o 00:15:15.625 LIB libspdk_bdev_ftl.a 00:15:15.625 SO libspdk_bdev_ftl.so.6.0 00:15:15.625 SO libspdk_bdev_raid.so.6.0 00:15:15.919 SYMLINK libspdk_bdev_ftl.so 00:15:15.919 CC module/bdev/nvme/vbdev_opal_rpc.o 00:15:15.919 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:15:15.919 SYMLINK libspdk_bdev_raid.so 00:15:15.919 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:15:15.919 LIB libspdk_bdev_virtio.a 00:15:15.919 LIB libspdk_bdev_iscsi.a 00:15:15.919 SO libspdk_bdev_virtio.so.6.0 00:15:15.919 SO libspdk_bdev_iscsi.so.6.0 00:15:16.179 SYMLINK libspdk_bdev_iscsi.so 00:15:16.179 SYMLINK libspdk_bdev_virtio.so 00:15:17.123 LIB libspdk_bdev_nvme.a 00:15:17.123 SO libspdk_bdev_nvme.so.7.1 00:15:17.123 SYMLINK libspdk_bdev_nvme.so 00:15:17.689 CC module/event/subsystems/iobuf/iobuf.o 00:15:17.689 CC module/event/subsystems/sock/sock.o 00:15:17.689 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:15:17.689 CC module/event/subsystems/keyring/keyring.o 00:15:17.689 CC module/event/subsystems/vmd/vmd.o 00:15:17.689 CC module/event/subsystems/vmd/vmd_rpc.o 00:15:17.689 CC module/event/subsystems/scheduler/scheduler.o 00:15:17.689 CC module/event/subsystems/fsdev/fsdev.o 00:15:17.689 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:15:17.689 LIB libspdk_event_keyring.a 00:15:17.689 LIB libspdk_event_sock.a 00:15:17.689 LIB libspdk_event_fsdev.a 00:15:17.689 LIB libspdk_event_scheduler.a 00:15:17.689 LIB libspdk_event_iobuf.a 00:15:17.689 SO 
libspdk_event_keyring.so.1.0 00:15:17.689 LIB libspdk_event_vmd.a 00:15:17.689 SO libspdk_event_sock.so.5.0 00:15:17.689 LIB libspdk_event_vhost_blk.a 00:15:17.689 SO libspdk_event_fsdev.so.1.0 00:15:17.689 SO libspdk_event_scheduler.so.4.0 00:15:17.689 SO libspdk_event_iobuf.so.3.0 00:15:17.689 SO libspdk_event_vmd.so.6.0 00:15:17.689 SO libspdk_event_vhost_blk.so.3.0 00:15:17.689 SYMLINK libspdk_event_keyring.so 00:15:17.689 SYMLINK libspdk_event_sock.so 00:15:17.689 SYMLINK libspdk_event_fsdev.so 00:15:17.689 SYMLINK libspdk_event_scheduler.so 00:15:17.689 SYMLINK libspdk_event_vhost_blk.so 00:15:17.689 SYMLINK libspdk_event_iobuf.so 00:15:17.689 SYMLINK libspdk_event_vmd.so 00:15:17.946 CC module/event/subsystems/accel/accel.o 00:15:18.204 LIB libspdk_event_accel.a 00:15:18.204 SO libspdk_event_accel.so.6.0 00:15:18.204 SYMLINK libspdk_event_accel.so 00:15:18.462 CC module/event/subsystems/bdev/bdev.o 00:15:18.719 LIB libspdk_event_bdev.a 00:15:18.719 SO libspdk_event_bdev.so.6.0 00:15:18.719 SYMLINK libspdk_event_bdev.so 00:15:18.977 CC module/event/subsystems/nbd/nbd.o 00:15:18.977 CC module/event/subsystems/scsi/scsi.o 00:15:18.977 CC module/event/subsystems/ublk/ublk.o 00:15:18.977 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:15:18.977 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:15:18.977 LIB libspdk_event_nbd.a 00:15:18.977 LIB libspdk_event_ublk.a 00:15:18.977 SO libspdk_event_nbd.so.6.0 00:15:18.977 LIB libspdk_event_scsi.a 00:15:18.977 SO libspdk_event_ublk.so.3.0 00:15:18.977 SO libspdk_event_scsi.so.6.0 00:15:18.977 SYMLINK libspdk_event_nbd.so 00:15:19.234 LIB libspdk_event_nvmf.a 00:15:19.234 SYMLINK libspdk_event_scsi.so 00:15:19.234 SYMLINK libspdk_event_ublk.so 00:15:19.234 SO libspdk_event_nvmf.so.6.0 00:15:19.234 SYMLINK libspdk_event_nvmf.so 00:15:19.234 CC module/event/subsystems/iscsi/iscsi.o 00:15:19.234 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:15:19.492 LIB libspdk_event_vhost_scsi.a 00:15:19.492 SO 
libspdk_event_vhost_scsi.so.3.0 00:15:19.492 LIB libspdk_event_iscsi.a 00:15:19.492 SO libspdk_event_iscsi.so.6.0 00:15:19.492 SYMLINK libspdk_event_vhost_scsi.so 00:15:19.492 SYMLINK libspdk_event_iscsi.so 00:15:19.750 SO libspdk.so.6.0 00:15:19.750 SYMLINK libspdk.so 00:15:19.750 CC app/trace_record/trace_record.o 00:15:19.750 CXX app/trace/trace.o 00:15:20.008 CC app/spdk_nvme_perf/perf.o 00:15:20.008 CC app/spdk_lspci/spdk_lspci.o 00:15:20.008 CC app/iscsi_tgt/iscsi_tgt.o 00:15:20.008 CC app/nvmf_tgt/nvmf_main.o 00:15:20.008 CC app/spdk_tgt/spdk_tgt.o 00:15:20.008 CC test/thread/poller_perf/poller_perf.o 00:15:20.008 CC examples/ioat/perf/perf.o 00:15:20.008 CC examples/util/zipf/zipf.o 00:15:20.008 LINK spdk_lspci 00:15:20.008 LINK iscsi_tgt 00:15:20.008 LINK poller_perf 00:15:20.008 LINK nvmf_tgt 00:15:20.008 LINK zipf 00:15:20.265 LINK spdk_tgt 00:15:20.265 LINK spdk_trace_record 00:15:20.265 LINK ioat_perf 00:15:20.265 CC app/spdk_nvme_identify/identify.o 00:15:20.265 LINK spdk_trace 00:15:20.265 TEST_HEADER include/spdk/accel.h 00:15:20.265 TEST_HEADER include/spdk/accel_module.h 00:15:20.265 TEST_HEADER include/spdk/assert.h 00:15:20.265 TEST_HEADER include/spdk/barrier.h 00:15:20.265 TEST_HEADER include/spdk/base64.h 00:15:20.265 TEST_HEADER include/spdk/bdev.h 00:15:20.265 TEST_HEADER include/spdk/bdev_module.h 00:15:20.265 TEST_HEADER include/spdk/bdev_zone.h 00:15:20.265 TEST_HEADER include/spdk/bit_array.h 00:15:20.265 TEST_HEADER include/spdk/bit_pool.h 00:15:20.265 TEST_HEADER include/spdk/blob_bdev.h 00:15:20.265 TEST_HEADER include/spdk/blobfs_bdev.h 00:15:20.265 TEST_HEADER include/spdk/blobfs.h 00:15:20.265 TEST_HEADER include/spdk/blob.h 00:15:20.265 TEST_HEADER include/spdk/conf.h 00:15:20.265 TEST_HEADER include/spdk/config.h 00:15:20.265 TEST_HEADER include/spdk/cpuset.h 00:15:20.265 TEST_HEADER include/spdk/crc16.h 00:15:20.265 TEST_HEADER include/spdk/crc32.h 00:15:20.265 CC app/spdk_nvme_discover/discovery_aer.o 00:15:20.265 TEST_HEADER 
include/spdk/crc64.h 00:15:20.265 TEST_HEADER include/spdk/dif.h 00:15:20.265 TEST_HEADER include/spdk/dma.h 00:15:20.265 TEST_HEADER include/spdk/endian.h 00:15:20.265 TEST_HEADER include/spdk/env_dpdk.h 00:15:20.265 TEST_HEADER include/spdk/env.h 00:15:20.265 TEST_HEADER include/spdk/event.h 00:15:20.265 TEST_HEADER include/spdk/fd_group.h 00:15:20.265 TEST_HEADER include/spdk/fd.h 00:15:20.265 TEST_HEADER include/spdk/file.h 00:15:20.524 TEST_HEADER include/spdk/fsdev.h 00:15:20.524 TEST_HEADER include/spdk/fsdev_module.h 00:15:20.524 TEST_HEADER include/spdk/ftl.h 00:15:20.524 TEST_HEADER include/spdk/fuse_dispatcher.h 00:15:20.524 TEST_HEADER include/spdk/gpt_spec.h 00:15:20.524 TEST_HEADER include/spdk/hexlify.h 00:15:20.524 CC test/dma/test_dma/test_dma.o 00:15:20.524 CC app/spdk_top/spdk_top.o 00:15:20.524 TEST_HEADER include/spdk/histogram_data.h 00:15:20.524 CC examples/ioat/verify/verify.o 00:15:20.524 TEST_HEADER include/spdk/idxd.h 00:15:20.524 TEST_HEADER include/spdk/idxd_spec.h 00:15:20.524 TEST_HEADER include/spdk/init.h 00:15:20.524 TEST_HEADER include/spdk/ioat.h 00:15:20.524 TEST_HEADER include/spdk/ioat_spec.h 00:15:20.524 TEST_HEADER include/spdk/iscsi_spec.h 00:15:20.524 TEST_HEADER include/spdk/json.h 00:15:20.524 TEST_HEADER include/spdk/jsonrpc.h 00:15:20.524 TEST_HEADER include/spdk/keyring.h 00:15:20.524 TEST_HEADER include/spdk/keyring_module.h 00:15:20.524 TEST_HEADER include/spdk/likely.h 00:15:20.524 CC test/app/bdev_svc/bdev_svc.o 00:15:20.524 TEST_HEADER include/spdk/log.h 00:15:20.524 TEST_HEADER include/spdk/lvol.h 00:15:20.524 TEST_HEADER include/spdk/md5.h 00:15:20.524 TEST_HEADER include/spdk/memory.h 00:15:20.524 TEST_HEADER include/spdk/mmio.h 00:15:20.524 TEST_HEADER include/spdk/nbd.h 00:15:20.524 TEST_HEADER include/spdk/net.h 00:15:20.524 TEST_HEADER include/spdk/notify.h 00:15:20.524 TEST_HEADER include/spdk/nvme.h 00:15:20.524 TEST_HEADER include/spdk/nvme_intel.h 00:15:20.524 TEST_HEADER include/spdk/nvme_ocssd.h 
00:15:20.524 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:15:20.524 TEST_HEADER include/spdk/nvme_spec.h 00:15:20.524 TEST_HEADER include/spdk/nvme_zns.h 00:15:20.524 TEST_HEADER include/spdk/nvmf_cmd.h 00:15:20.524 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:15:20.524 TEST_HEADER include/spdk/nvmf.h 00:15:20.524 TEST_HEADER include/spdk/nvmf_spec.h 00:15:20.524 TEST_HEADER include/spdk/nvmf_transport.h 00:15:20.524 TEST_HEADER include/spdk/opal.h 00:15:20.524 TEST_HEADER include/spdk/opal_spec.h 00:15:20.524 TEST_HEADER include/spdk/pci_ids.h 00:15:20.524 TEST_HEADER include/spdk/pipe.h 00:15:20.524 TEST_HEADER include/spdk/queue.h 00:15:20.524 TEST_HEADER include/spdk/reduce.h 00:15:20.524 TEST_HEADER include/spdk/rpc.h 00:15:20.524 TEST_HEADER include/spdk/scheduler.h 00:15:20.524 TEST_HEADER include/spdk/scsi.h 00:15:20.524 TEST_HEADER include/spdk/scsi_spec.h 00:15:20.524 TEST_HEADER include/spdk/sock.h 00:15:20.524 TEST_HEADER include/spdk/stdinc.h 00:15:20.524 TEST_HEADER include/spdk/string.h 00:15:20.524 TEST_HEADER include/spdk/thread.h 00:15:20.524 CC test/env/mem_callbacks/mem_callbacks.o 00:15:20.524 TEST_HEADER include/spdk/trace.h 00:15:20.524 TEST_HEADER include/spdk/trace_parser.h 00:15:20.524 TEST_HEADER include/spdk/tree.h 00:15:20.524 TEST_HEADER include/spdk/ublk.h 00:15:20.524 TEST_HEADER include/spdk/util.h 00:15:20.524 TEST_HEADER include/spdk/uuid.h 00:15:20.524 TEST_HEADER include/spdk/version.h 00:15:20.524 TEST_HEADER include/spdk/vfio_user_pci.h 00:15:20.524 TEST_HEADER include/spdk/vfio_user_spec.h 00:15:20.524 TEST_HEADER include/spdk/vhost.h 00:15:20.524 TEST_HEADER include/spdk/vmd.h 00:15:20.524 TEST_HEADER include/spdk/xor.h 00:15:20.524 TEST_HEADER include/spdk/zipf.h 00:15:20.524 CXX test/cpp_headers/accel.o 00:15:20.524 LINK spdk_nvme_discover 00:15:20.524 LINK spdk_nvme_perf 00:15:20.524 CC test/event/event_perf/event_perf.o 00:15:20.782 LINK bdev_svc 00:15:20.782 LINK verify 00:15:20.782 CXX test/cpp_headers/accel_module.o 
00:15:20.782 LINK event_perf 00:15:20.782 CC test/rpc_client/rpc_client_test.o 00:15:20.782 CC test/env/vtophys/vtophys.o 00:15:20.782 CXX test/cpp_headers/assert.o 00:15:21.040 LINK test_dma 00:15:21.040 CC examples/interrupt_tgt/interrupt_tgt.o 00:15:21.040 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:15:21.040 LINK vtophys 00:15:21.040 CC test/event/reactor/reactor.o 00:15:21.040 LINK rpc_client_test 00:15:21.040 CXX test/cpp_headers/barrier.o 00:15:21.040 LINK mem_callbacks 00:15:21.040 LINK spdk_nvme_identify 00:15:21.040 LINK interrupt_tgt 00:15:21.040 CXX test/cpp_headers/base64.o 00:15:21.040 CXX test/cpp_headers/bdev.o 00:15:21.299 LINK reactor 00:15:21.299 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:15:21.299 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:15:21.299 CC test/event/reactor_perf/reactor_perf.o 00:15:21.299 CXX test/cpp_headers/bdev_module.o 00:15:21.300 CC test/event/app_repeat/app_repeat.o 00:15:21.300 LINK env_dpdk_post_init 00:15:21.300 LINK nvme_fuzz 00:15:21.300 LINK reactor_perf 00:15:21.557 CC test/event/scheduler/scheduler.o 00:15:21.557 CC app/spdk_dd/spdk_dd.o 00:15:21.557 LINK spdk_top 00:15:21.557 CC examples/thread/thread/thread_ex.o 00:15:21.557 LINK app_repeat 00:15:21.557 CXX test/cpp_headers/bdev_zone.o 00:15:21.557 CC test/env/memory/memory_ut.o 00:15:21.557 LINK scheduler 00:15:21.557 CC app/vhost/vhost.o 00:15:21.815 CXX test/cpp_headers/bit_array.o 00:15:21.815 CC test/app/histogram_perf/histogram_perf.o 00:15:21.815 CC app/fio/nvme/fio_plugin.o 00:15:21.815 LINK thread 00:15:21.815 CC test/app/jsoncat/jsoncat.o 00:15:21.815 LINK spdk_dd 00:15:21.815 CXX test/cpp_headers/bit_pool.o 00:15:21.815 LINK histogram_perf 00:15:21.815 LINK vhost 00:15:21.815 LINK jsoncat 00:15:21.815 CC app/fio/bdev/fio_plugin.o 00:15:22.072 CXX test/cpp_headers/blob_bdev.o 00:15:22.072 CXX test/cpp_headers/blobfs_bdev.o 00:15:22.072 CXX test/cpp_headers/blobfs.o 00:15:22.072 CC examples/sock/hello_world/hello_sock.o 00:15:22.072 CC 
test/app/stub/stub.o 00:15:22.072 CXX test/cpp_headers/blob.o 00:15:22.072 CC test/env/pci/pci_ut.o 00:15:22.072 CXX test/cpp_headers/conf.o 00:15:22.330 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:15:22.330 LINK stub 00:15:22.330 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:15:22.330 CXX test/cpp_headers/config.o 00:15:22.330 CXX test/cpp_headers/cpuset.o 00:15:22.330 LINK spdk_nvme 00:15:22.330 LINK hello_sock 00:15:22.330 LINK spdk_bdev 00:15:22.589 CXX test/cpp_headers/crc16.o 00:15:22.589 LINK pci_ut 00:15:22.589 CC test/accel/dif/dif.o 00:15:22.589 CC test/blobfs/mkfs/mkfs.o 00:15:22.589 CC examples/vmd/lsvmd/lsvmd.o 00:15:22.589 CC test/lvol/esnap/esnap.o 00:15:22.589 CXX test/cpp_headers/crc32.o 00:15:22.589 CC examples/idxd/perf/perf.o 00:15:22.589 LINK vhost_fuzz 00:15:22.846 LINK memory_ut 00:15:22.846 LINK lsvmd 00:15:22.846 CXX test/cpp_headers/crc64.o 00:15:22.846 LINK mkfs 00:15:22.846 CC examples/vmd/led/led.o 00:15:22.846 CXX test/cpp_headers/dif.o 00:15:22.846 CXX test/cpp_headers/dma.o 00:15:23.117 LINK led 00:15:23.117 CC examples/fsdev/hello_world/hello_fsdev.o 00:15:23.117 LINK iscsi_fuzz 00:15:23.117 CC test/nvme/aer/aer.o 00:15:23.117 LINK idxd_perf 00:15:23.117 CC examples/accel/perf/accel_perf.o 00:15:23.117 CXX test/cpp_headers/endian.o 00:15:23.117 CC test/nvme/reset/reset.o 00:15:23.117 CC test/nvme/sgl/sgl.o 00:15:23.117 CXX test/cpp_headers/env_dpdk.o 00:15:23.378 LINK hello_fsdev 00:15:23.378 LINK dif 00:15:23.378 LINK aer 00:15:23.378 CC examples/blob/hello_world/hello_blob.o 00:15:23.378 CC examples/blob/cli/blobcli.o 00:15:23.378 CXX test/cpp_headers/env.o 00:15:23.378 LINK reset 00:15:23.378 LINK sgl 00:15:23.378 LINK hello_blob 00:15:23.637 CXX test/cpp_headers/event.o 00:15:23.637 CC test/nvme/e2edp/nvme_dp.o 00:15:23.637 CXX test/cpp_headers/fd_group.o 00:15:23.637 CC test/nvme/overhead/overhead.o 00:15:23.637 LINK accel_perf 00:15:23.637 CC examples/nvme/hello_world/hello_world.o 00:15:23.637 CXX test/cpp_headers/fd.o 
00:15:23.637 CXX test/cpp_headers/file.o 00:15:23.637 CC test/nvme/err_injection/err_injection.o 00:15:23.637 CC test/nvme/startup/startup.o 00:15:23.637 CC test/nvme/reserve/reserve.o 00:15:23.895 CC test/nvme/simple_copy/simple_copy.o 00:15:23.895 LINK hello_world 00:15:23.895 LINK nvme_dp 00:15:23.895 LINK overhead 00:15:23.895 LINK blobcli 00:15:23.895 CXX test/cpp_headers/fsdev.o 00:15:23.895 LINK startup 00:15:23.895 LINK err_injection 00:15:23.895 LINK reserve 00:15:23.895 LINK simple_copy 00:15:23.895 CC examples/nvme/reconnect/reconnect.o 00:15:24.153 CXX test/cpp_headers/fsdev_module.o 00:15:24.153 CC test/nvme/connect_stress/connect_stress.o 00:15:24.153 CC test/nvme/boot_partition/boot_partition.o 00:15:24.153 CC examples/bdev/hello_world/hello_bdev.o 00:15:24.153 CC examples/bdev/bdevperf/bdevperf.o 00:15:24.153 CC test/nvme/compliance/nvme_compliance.o 00:15:24.153 CXX test/cpp_headers/ftl.o 00:15:24.153 LINK connect_stress 00:15:24.153 CC test/nvme/fused_ordering/fused_ordering.o 00:15:24.153 LINK boot_partition 00:15:24.153 CC test/bdev/bdevio/bdevio.o 00:15:24.413 LINK hello_bdev 00:15:24.413 LINK reconnect 00:15:24.413 CXX test/cpp_headers/fuse_dispatcher.o 00:15:24.413 LINK fused_ordering 00:15:24.413 CC test/nvme/doorbell_aers/doorbell_aers.o 00:15:24.413 CC test/nvme/fdp/fdp.o 00:15:24.413 LINK nvme_compliance 00:15:24.413 CXX test/cpp_headers/gpt_spec.o 00:15:24.671 CC examples/nvme/nvme_manage/nvme_manage.o 00:15:24.671 CC examples/nvme/arbitration/arbitration.o 00:15:24.671 LINK doorbell_aers 00:15:24.671 CXX test/cpp_headers/hexlify.o 00:15:24.671 CC test/nvme/cuse/cuse.o 00:15:24.671 LINK bdevio 00:15:24.671 CXX test/cpp_headers/histogram_data.o 00:15:24.671 CXX test/cpp_headers/idxd.o 00:15:24.671 LINK fdp 00:15:24.930 CC examples/nvme/hotplug/hotplug.o 00:15:24.930 CC examples/nvme/cmb_copy/cmb_copy.o 00:15:24.930 CXX test/cpp_headers/idxd_spec.o 00:15:24.930 CXX test/cpp_headers/init.o 00:15:24.930 CC examples/nvme/abort/abort.o 
00:15:24.930 LINK arbitration 00:15:24.930 LINK bdevperf 00:15:25.189 CXX test/cpp_headers/ioat.o 00:15:25.189 LINK cmb_copy 00:15:25.189 LINK hotplug 00:15:25.189 CXX test/cpp_headers/ioat_spec.o 00:15:25.189 CXX test/cpp_headers/iscsi_spec.o 00:15:25.189 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:15:25.189 LINK nvme_manage 00:15:25.189 CXX test/cpp_headers/json.o 00:15:25.189 CXX test/cpp_headers/jsonrpc.o 00:15:25.189 CXX test/cpp_headers/keyring.o 00:15:25.189 CXX test/cpp_headers/keyring_module.o 00:15:25.189 CXX test/cpp_headers/likely.o 00:15:25.189 LINK pmr_persistence 00:15:25.189 LINK abort 00:15:25.447 CXX test/cpp_headers/log.o 00:15:25.447 CXX test/cpp_headers/lvol.o 00:15:25.447 CXX test/cpp_headers/md5.o 00:15:25.447 CXX test/cpp_headers/memory.o 00:15:25.447 CXX test/cpp_headers/mmio.o 00:15:25.447 CXX test/cpp_headers/nbd.o 00:15:25.447 CXX test/cpp_headers/net.o 00:15:25.447 CXX test/cpp_headers/notify.o 00:15:25.447 CXX test/cpp_headers/nvme.o 00:15:25.447 CXX test/cpp_headers/nvme_intel.o 00:15:25.447 CXX test/cpp_headers/nvme_ocssd.o 00:15:25.447 CXX test/cpp_headers/nvme_ocssd_spec.o 00:15:25.447 CXX test/cpp_headers/nvme_spec.o 00:15:25.705 CXX test/cpp_headers/nvme_zns.o 00:15:25.705 CXX test/cpp_headers/nvmf_cmd.o 00:15:25.705 CC examples/nvmf/nvmf/nvmf.o 00:15:25.705 CXX test/cpp_headers/nvmf_fc_spec.o 00:15:25.705 CXX test/cpp_headers/nvmf.o 00:15:25.705 CXX test/cpp_headers/nvmf_spec.o 00:15:25.705 CXX test/cpp_headers/nvmf_transport.o 00:15:25.705 CXX test/cpp_headers/opal.o 00:15:25.705 CXX test/cpp_headers/opal_spec.o 00:15:25.705 CXX test/cpp_headers/pci_ids.o 00:15:25.964 CXX test/cpp_headers/pipe.o 00:15:25.964 CXX test/cpp_headers/queue.o 00:15:25.964 CXX test/cpp_headers/reduce.o 00:15:25.964 CXX test/cpp_headers/rpc.o 00:15:25.964 CXX test/cpp_headers/scheduler.o 00:15:25.964 CXX test/cpp_headers/scsi.o 00:15:25.964 CXX test/cpp_headers/scsi_spec.o 00:15:25.964 CXX test/cpp_headers/sock.o 00:15:25.964 LINK nvmf 
00:15:25.964 CXX test/cpp_headers/stdinc.o 00:15:25.964 LINK cuse 00:15:25.964 CXX test/cpp_headers/string.o 00:15:25.964 CXX test/cpp_headers/thread.o 00:15:25.964 CXX test/cpp_headers/trace.o 00:15:25.964 CXX test/cpp_headers/trace_parser.o 00:15:25.964 CXX test/cpp_headers/tree.o 00:15:25.964 CXX test/cpp_headers/ublk.o 00:15:25.964 CXX test/cpp_headers/util.o 00:15:26.222 CXX test/cpp_headers/uuid.o 00:15:26.222 CXX test/cpp_headers/version.o 00:15:26.222 CXX test/cpp_headers/vfio_user_pci.o 00:15:26.222 CXX test/cpp_headers/vfio_user_spec.o 00:15:26.222 CXX test/cpp_headers/vhost.o 00:15:26.222 CXX test/cpp_headers/vmd.o 00:15:26.222 CXX test/cpp_headers/xor.o 00:15:26.222 CXX test/cpp_headers/zipf.o 00:15:28.149 LINK esnap 00:15:28.407 00:15:28.407 real 1m19.027s 00:15:28.407 user 6m57.692s 00:15:28.407 sys 1m16.461s 00:15:28.407 17:03:04 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:15:28.407 17:03:04 make -- common/autotest_common.sh@10 -- $ set +x 00:15:28.407 ************************************ 00:15:28.407 END TEST make 00:15:28.407 ************************************ 00:15:28.407 17:03:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:15:28.407 17:03:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:15:28.407 17:03:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:15:28.407 17:03:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:28.407 17:03:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:15:28.407 17:03:05 -- pm/common@44 -- $ pid=5035 00:15:28.407 17:03:05 -- pm/common@50 -- $ kill -TERM 5035 00:15:28.407 17:03:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:28.407 17:03:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:15:28.407 17:03:05 -- pm/common@44 -- $ pid=5036 00:15:28.407 17:03:05 -- pm/common@50 -- $ kill -TERM 5036 00:15:28.407 17:03:05 -- 
spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:15:28.407 17:03:05 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:15:28.407 17:03:05 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:28.407 17:03:05 -- common/autotest_common.sh@1691 -- # lcov --version 00:15:28.407 17:03:05 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:28.665 17:03:05 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:28.665 17:03:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.665 17:03:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.665 17:03:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.665 17:03:05 -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.665 17:03:05 -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.665 17:03:05 -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.665 17:03:05 -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.666 17:03:05 -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.666 17:03:05 -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.666 17:03:05 -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.666 17:03:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.666 17:03:05 -- scripts/common.sh@344 -- # case "$op" in 00:15:28.666 17:03:05 -- scripts/common.sh@345 -- # : 1 00:15:28.666 17:03:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.666 17:03:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.666 17:03:05 -- scripts/common.sh@365 -- # decimal 1 00:15:28.666 17:03:05 -- scripts/common.sh@353 -- # local d=1 00:15:28.666 17:03:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.666 17:03:05 -- scripts/common.sh@355 -- # echo 1 00:15:28.666 17:03:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.666 17:03:05 -- scripts/common.sh@366 -- # decimal 2 00:15:28.666 17:03:05 -- scripts/common.sh@353 -- # local d=2 00:15:28.666 17:03:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.666 17:03:05 -- scripts/common.sh@355 -- # echo 2 00:15:28.666 17:03:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.666 17:03:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.666 17:03:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.666 17:03:05 -- scripts/common.sh@368 -- # return 0 00:15:28.666 17:03:05 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.666 17:03:05 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:28.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.666 --rc genhtml_branch_coverage=1 00:15:28.666 --rc genhtml_function_coverage=1 00:15:28.666 --rc genhtml_legend=1 00:15:28.666 --rc geninfo_all_blocks=1 00:15:28.666 --rc geninfo_unexecuted_blocks=1 00:15:28.666 00:15:28.666 ' 00:15:28.666 17:03:05 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:28.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.666 --rc genhtml_branch_coverage=1 00:15:28.666 --rc genhtml_function_coverage=1 00:15:28.666 --rc genhtml_legend=1 00:15:28.666 --rc geninfo_all_blocks=1 00:15:28.666 --rc geninfo_unexecuted_blocks=1 00:15:28.666 00:15:28.666 ' 00:15:28.666 17:03:05 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:28.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.666 --rc genhtml_branch_coverage=1 00:15:28.666 --rc 
genhtml_function_coverage=1 00:15:28.666 --rc genhtml_legend=1 00:15:28.666 --rc geninfo_all_blocks=1 00:15:28.666 --rc geninfo_unexecuted_blocks=1 00:15:28.666 00:15:28.666 ' 00:15:28.666 17:03:05 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:28.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.666 --rc genhtml_branch_coverage=1 00:15:28.666 --rc genhtml_function_coverage=1 00:15:28.666 --rc genhtml_legend=1 00:15:28.666 --rc geninfo_all_blocks=1 00:15:28.666 --rc geninfo_unexecuted_blocks=1 00:15:28.666 00:15:28.666 ' 00:15:28.666 17:03:05 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.666 17:03:05 -- nvmf/common.sh@7 -- # uname -s 00:15:28.666 17:03:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.666 17:03:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.666 17:03:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.666 17:03:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.666 17:03:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.666 17:03:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.666 17:03:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.666 17:03:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.666 17:03:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.666 17:03:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.666 17:03:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:046bd5e6-ee68-47d1-b992-5bf814ef401d 00:15:28.666 17:03:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=046bd5e6-ee68-47d1-b992-5bf814ef401d 00:15:28.666 17:03:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.666 17:03:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.666 17:03:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:28.666 17:03:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:15:28.666 17:03:05 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.666 17:03:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.666 17:03:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.666 17:03:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.666 17:03:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.666 17:03:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.666 17:03:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.666 17:03:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.666 17:03:05 -- paths/export.sh@5 -- # export PATH 00:15:28.666 17:03:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.666 17:03:05 -- nvmf/common.sh@51 -- # : 0 00:15:28.666 17:03:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.666 17:03:05 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.666 17:03:05 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:15:28.666 17:03:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.666 17:03:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.666 17:03:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.666 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.666 17:03:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.666 17:03:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.666 17:03:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.666 17:03:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:15:28.666 17:03:05 -- spdk/autotest.sh@32 -- # uname -s 00:15:28.666 17:03:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:15:28.666 17:03:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:15:28.666 17:03:05 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:15:28.666 17:03:05 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:15:28.666 17:03:05 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:15:28.666 17:03:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:15:28.666 17:03:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:15:28.666 17:03:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:15:28.666 17:03:05 -- spdk/autotest.sh@48 -- # udevadm_pid=53865 00:15:28.666 17:03:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:15:28.666 17:03:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:15:28.666 17:03:05 -- pm/common@17 -- # local monitor 00:15:28.666 17:03:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:28.666 17:03:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:28.666 17:03:05 -- pm/common@25 -- # sleep 1 00:15:28.666 17:03:05 -- pm/common@21 -- # date +%s 00:15:28.666 17:03:05 -- 
pm/common@21 -- # date +%s 00:15:28.666 17:03:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731085385 00:15:28.666 17:03:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731085385 00:15:28.666 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731085385_collect-vmstat.pm.log 00:15:28.666 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731085385_collect-cpu-load.pm.log 00:15:29.603 17:03:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:15:29.603 17:03:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:15:29.603 17:03:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.603 17:03:06 -- common/autotest_common.sh@10 -- # set +x 00:15:29.603 17:03:06 -- spdk/autotest.sh@59 -- # create_test_list 00:15:29.603 17:03:06 -- common/autotest_common.sh@750 -- # xtrace_disable 00:15:29.603 17:03:06 -- common/autotest_common.sh@10 -- # set +x 00:15:29.603 17:03:06 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:15:29.603 17:03:06 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:15:29.603 17:03:06 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:15:29.603 17:03:06 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:15:29.603 17:03:06 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:15:29.603 17:03:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:15:29.603 17:03:06 -- common/autotest_common.sh@1455 -- # uname 00:15:29.603 17:03:06 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:15:29.603 17:03:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:15:29.603 17:03:06 -- common/autotest_common.sh@1475 -- 
# uname 00:15:29.603 17:03:06 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:15:29.603 17:03:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:15:29.603 17:03:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:15:29.861 lcov: LCOV version 1.15 00:15:29.861 17:03:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:15:44.740 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:15:44.740 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:15:59.608 17:03:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:15:59.608 17:03:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:59.608 17:03:35 -- common/autotest_common.sh@10 -- # set +x 00:15:59.608 17:03:35 -- spdk/autotest.sh@78 -- # rm -f 00:15:59.608 17:03:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:59.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:59.608 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:59.608 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:59.608 17:03:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:15:59.608 17:03:36 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:59.608 17:03:36 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:59.608 17:03:36 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:59.608 
17:03:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:59.608 17:03:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:59.608 17:03:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:59.608 17:03:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:59.608 17:03:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:59.608 17:03:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:59.608 17:03:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:15:59.608 17:03:36 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:59.608 17:03:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:59.608 17:03:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:59.608 17:03:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:59.608 17:03:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:15:59.608 17:03:36 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:15:59.608 17:03:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:15:59.608 17:03:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:59.608 17:03:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:59.608 17:03:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:15:59.608 17:03:36 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:15:59.608 17:03:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:15:59.608 17:03:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:59.608 17:03:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:15:59.608 17:03:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:59.609 17:03:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:59.609 17:03:36 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:15:59.609 17:03:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:15:59.609 17:03:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:59.609 No valid GPT data, bailing 00:15:59.609 17:03:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:59.609 17:03:36 -- scripts/common.sh@394 -- # pt= 00:15:59.609 17:03:36 -- scripts/common.sh@395 -- # return 1 00:15:59.609 17:03:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:15:59.609 1+0 records in 00:15:59.609 1+0 records out 00:15:59.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511996 s, 205 MB/s 00:15:59.609 17:03:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:59.609 17:03:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:59.609 17:03:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:15:59.609 17:03:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:15:59.609 17:03:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:15:59.867 No valid GPT data, bailing 00:15:59.867 17:03:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:59.867 17:03:36 -- scripts/common.sh@394 -- # pt= 00:15:59.867 17:03:36 -- scripts/common.sh@395 -- # return 1 00:15:59.867 17:03:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:15:59.867 1+0 records in 00:15:59.867 1+0 records out 00:15:59.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481283 s, 218 MB/s 00:15:59.867 17:03:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:59.867 17:03:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:59.867 17:03:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:15:59.867 17:03:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:15:59.867 17:03:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:15:59.867 No valid GPT data, bailing 00:15:59.867 17:03:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:15:59.867 17:03:36 -- scripts/common.sh@394 -- # pt= 00:15:59.867 17:03:36 -- scripts/common.sh@395 -- # return 1 00:15:59.867 17:03:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:15:59.867 1+0 records in 00:15:59.867 1+0 records out 00:15:59.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472096 s, 222 MB/s 00:15:59.867 17:03:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:15:59.867 17:03:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:15:59.867 17:03:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:15:59.867 17:03:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:15:59.867 17:03:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:15:59.867 No valid GPT data, bailing 00:15:59.867 17:03:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:15:59.867 17:03:36 -- scripts/common.sh@394 -- # pt= 00:15:59.867 17:03:36 -- scripts/common.sh@395 -- # return 1 00:15:59.867 17:03:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:15:59.867 1+0 records in 00:15:59.867 1+0 records out 00:15:59.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491188 s, 213 MB/s 00:15:59.867 17:03:36 -- spdk/autotest.sh@105 -- # sync 00:16:00.125 17:03:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:16:00.125 17:03:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:16:00.125 17:03:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:16:02.027 17:03:38 -- spdk/autotest.sh@111 -- # uname -s 00:16:02.028 17:03:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:16:02.028 17:03:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:16:02.028 17:03:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:16:02.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:02.286 Hugepages 00:16:02.286 node hugesize free / total 00:16:02.286 node0 1048576kB 0 / 0 00:16:02.286 node0 2048kB 0 / 0 00:16:02.286 00:16:02.286 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:02.286 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:16:02.286 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:16:02.543 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:16:02.543 17:03:39 -- spdk/autotest.sh@117 -- # uname -s 00:16:02.543 17:03:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:16:02.543 17:03:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:16:02.543 17:03:39 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:03.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:03.120 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:03.120 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:03.378 17:03:39 -- common/autotest_common.sh@1515 -- # sleep 1 00:16:04.368 17:03:40 -- common/autotest_common.sh@1516 -- # bdfs=() 00:16:04.368 17:03:40 -- common/autotest_common.sh@1516 -- # local bdfs 00:16:04.368 17:03:40 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:16:04.368 17:03:40 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:16:04.368 17:03:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:16:04.368 17:03:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:16:04.368 17:03:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:04.368 17:03:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:16:04.368 17:03:40 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:04.368 17:03:40 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:16:04.368 17:03:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:16:04.368 17:03:40 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:04.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:04.626 Waiting for block devices as requested 00:16:04.626 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:04.884 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:04.884 17:03:41 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:16:04.884 17:03:41 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:16:04.884 17:03:41 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:16:04.884 17:03:41 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:16:04.885 17:03:41 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:16:04.885 17:03:41 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:16:04.885 17:03:41 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:16:04.885 17:03:41 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:16:04.885 17:03:41 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:16:04.885 17:03:41 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:16:04.885 17:03:41 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:16:04.885 17:03:41 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:16:04.885 17:03:41 -- common/autotest_common.sh@1529 -- # grep oacs 00:16:04.885 17:03:41 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:16:04.885 17:03:41 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:16:04.885 17:03:41 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:16:04.885 17:03:41 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:16:04.885 17:03:41 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:16:04.885 17:03:41 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:16:04.885 17:03:41 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:16:04.885 17:03:41 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:16:04.885 17:03:41 -- common/autotest_common.sh@1541 -- # continue 00:16:04.885 17:03:41 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:16:04.885 17:03:41 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:16:04.885 17:03:41 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:16:04.885 17:03:41 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:16:04.885 17:03:41 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:16:04.885 17:03:41 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:16:04.885 17:03:41 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:16:04.885 17:03:41 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:16:04.885 17:03:41 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:16:04.885 17:03:41 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:16:04.885 17:03:41 -- common/autotest_common.sh@1529 -- # grep oacs 00:16:04.885 17:03:41 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:16:04.885 17:03:41 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:16:04.885 17:03:41 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:16:04.885 17:03:41 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:16:04.885 17:03:41 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:16:04.885 17:03:41 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:16:04.885 17:03:41 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:16:04.885 17:03:41 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:16:04.885 17:03:41 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:16:04.885 17:03:41 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:16:04.885 17:03:41 -- common/autotest_common.sh@1541 -- # continue 00:16:04.885 17:03:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:16:04.885 17:03:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:04.885 17:03:41 -- common/autotest_common.sh@10 -- # set +x 00:16:04.885 17:03:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:16:04.885 17:03:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:04.885 17:03:41 -- common/autotest_common.sh@10 -- # set +x 00:16:04.885 17:03:41 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:05.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:05.707 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:05.707 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:05.707 17:03:42 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:16:05.707 17:03:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:05.707 17:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:05.707 17:03:42 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:16:05.707 17:03:42 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:16:05.707 17:03:42 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:16:05.707 17:03:42 -- common/autotest_common.sh@1561 -- # bdfs=() 00:16:05.707 17:03:42 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:16:05.707 17:03:42 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:16:05.707 17:03:42 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:16:05.707 17:03:42 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:16:05.707 
17:03:42 -- common/autotest_common.sh@1496 -- # bdfs=() 00:16:05.707 17:03:42 -- common/autotest_common.sh@1496 -- # local bdfs 00:16:05.707 17:03:42 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:05.707 17:03:42 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:05.707 17:03:42 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:16:05.965 17:03:42 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:16:05.965 17:03:42 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:16:05.965 17:03:42 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:16:05.965 17:03:42 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:16:05.966 17:03:42 -- common/autotest_common.sh@1564 -- # device=0x0010 00:16:05.966 17:03:42 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:05.966 17:03:42 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:16:05.966 17:03:42 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:16:05.966 17:03:42 -- common/autotest_common.sh@1564 -- # device=0x0010 00:16:05.966 17:03:42 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:05.966 17:03:42 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:16:05.966 17:03:42 -- common/autotest_common.sh@1570 -- # return 0 00:16:05.966 17:03:42 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:16:05.966 17:03:42 -- common/autotest_common.sh@1578 -- # return 0 00:16:05.966 17:03:42 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:16:05.966 17:03:42 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:16:05.966 17:03:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:16:05.966 17:03:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:16:05.966 17:03:42 -- spdk/autotest.sh@149 -- # timing_enter lib 00:16:05.966 17:03:42 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:16:05.966 17:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:05.966 17:03:42 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:16:05.966 17:03:42 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:16:05.966 17:03:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:05.966 17:03:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:05.966 17:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:05.966 ************************************ 00:16:05.966 START TEST env 00:16:05.966 ************************************ 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:16:05.966 * Looking for test storage... 00:16:05.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1691 -- # lcov --version 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:05.966 17:03:42 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.966 17:03:42 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.966 17:03:42 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.966 17:03:42 env -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.966 17:03:42 env -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.966 17:03:42 env -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.966 17:03:42 env -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.966 17:03:42 env -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.966 17:03:42 env -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.966 17:03:42 env -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.966 17:03:42 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.966 17:03:42 env -- 
scripts/common.sh@344 -- # case "$op" in 00:16:05.966 17:03:42 env -- scripts/common.sh@345 -- # : 1 00:16:05.966 17:03:42 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.966 17:03:42 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:05.966 17:03:42 env -- scripts/common.sh@365 -- # decimal 1 00:16:05.966 17:03:42 env -- scripts/common.sh@353 -- # local d=1 00:16:05.966 17:03:42 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.966 17:03:42 env -- scripts/common.sh@355 -- # echo 1 00:16:05.966 17:03:42 env -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.966 17:03:42 env -- scripts/common.sh@366 -- # decimal 2 00:16:05.966 17:03:42 env -- scripts/common.sh@353 -- # local d=2 00:16:05.966 17:03:42 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.966 17:03:42 env -- scripts/common.sh@355 -- # echo 2 00:16:05.966 17:03:42 env -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.966 17:03:42 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.966 17:03:42 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.966 17:03:42 env -- scripts/common.sh@368 -- # return 0 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:05.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.966 --rc genhtml_branch_coverage=1 00:16:05.966 --rc genhtml_function_coverage=1 00:16:05.966 --rc genhtml_legend=1 00:16:05.966 --rc geninfo_all_blocks=1 00:16:05.966 --rc geninfo_unexecuted_blocks=1 00:16:05.966 00:16:05.966 ' 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:05.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.966 --rc genhtml_branch_coverage=1 00:16:05.966 --rc genhtml_function_coverage=1 00:16:05.966 --rc genhtml_legend=1 00:16:05.966 --rc 
geninfo_all_blocks=1 00:16:05.966 --rc geninfo_unexecuted_blocks=1 00:16:05.966 00:16:05.966 ' 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:05.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.966 --rc genhtml_branch_coverage=1 00:16:05.966 --rc genhtml_function_coverage=1 00:16:05.966 --rc genhtml_legend=1 00:16:05.966 --rc geninfo_all_blocks=1 00:16:05.966 --rc geninfo_unexecuted_blocks=1 00:16:05.966 00:16:05.966 ' 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:05.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.966 --rc genhtml_branch_coverage=1 00:16:05.966 --rc genhtml_function_coverage=1 00:16:05.966 --rc genhtml_legend=1 00:16:05.966 --rc geninfo_all_blocks=1 00:16:05.966 --rc geninfo_unexecuted_blocks=1 00:16:05.966 00:16:05.966 ' 00:16:05.966 17:03:42 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:05.966 17:03:42 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:05.966 17:03:42 env -- common/autotest_common.sh@10 -- # set +x 00:16:05.966 ************************************ 00:16:05.966 START TEST env_memory 00:16:05.966 ************************************ 00:16:05.966 17:03:42 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:16:05.966 00:16:05.966 00:16:05.966 CUnit - A unit testing framework for C - Version 2.1-3 00:16:05.966 http://cunit.sourceforge.net/ 00:16:05.966 00:16:05.966 00:16:05.966 Suite: memory 00:16:06.224 Test: alloc and free memory map ...[2024-11-08 17:03:42.693513] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:16:06.224 passed 00:16:06.224 Test: mem map translation ...[2024-11-08 17:03:42.732502] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:16:06.224 [2024-11-08 17:03:42.732650] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:16:06.224 [2024-11-08 17:03:42.732769] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:16:06.224 [2024-11-08 17:03:42.732811] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:16:06.224 passed 00:16:06.224 Test: mem map registration ...[2024-11-08 17:03:42.801082] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:16:06.224 [2024-11-08 17:03:42.801225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:16:06.224 passed 00:16:06.224 Test: mem map adjacent registrations ...passed 00:16:06.224 00:16:06.224 Run Summary: Type Total Ran Passed Failed Inactive 00:16:06.224 suites 1 1 n/a 0 0 00:16:06.224 tests 4 4 4 0 0 00:16:06.224 asserts 152 152 152 0 n/a 00:16:06.224 00:16:06.224 Elapsed time = 0.233 seconds 00:16:06.224 00:16:06.224 real 0m0.266s 00:16:06.224 user 0m0.237s 00:16:06.224 sys 0m0.021s 00:16:06.224 17:03:42 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:06.224 17:03:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:16:06.224 ************************************ 00:16:06.224 END TEST env_memory 00:16:06.224 ************************************ 00:16:06.481 17:03:42 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:16:06.481 
17:03:42 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:06.481 17:03:42 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:06.481 17:03:42 env -- common/autotest_common.sh@10 -- # set +x 00:16:06.481 ************************************ 00:16:06.481 START TEST env_vtophys 00:16:06.481 ************************************ 00:16:06.481 17:03:42 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:16:06.481 EAL: lib.eal log level changed from notice to debug 00:16:06.481 EAL: Detected lcore 0 as core 0 on socket 0 00:16:06.481 EAL: Detected lcore 1 as core 0 on socket 0 00:16:06.481 EAL: Detected lcore 2 as core 0 on socket 0 00:16:06.481 EAL: Detected lcore 3 as core 0 on socket 0 00:16:06.481 EAL: Detected lcore 4 as core 0 on socket 0 00:16:06.481 EAL: Detected lcore 5 as core 0 on socket 0 00:16:06.481 EAL: Detected lcore 6 as core 0 on socket 0 00:16:06.481 EAL: Detected lcore 7 as core 0 on socket 0 00:16:06.481 EAL: Detected lcore 8 as core 0 on socket 0 00:16:06.481 EAL: Detected lcore 9 as core 0 on socket 0 00:16:06.481 EAL: Maximum logical cores by configuration: 128 00:16:06.481 EAL: Detected CPU lcores: 10 00:16:06.481 EAL: Detected NUMA nodes: 1 00:16:06.481 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:16:06.481 EAL: Detected shared linkage of DPDK 00:16:06.481 EAL: No shared files mode enabled, IPC will be disabled 00:16:06.481 EAL: Selected IOVA mode 'PA' 00:16:06.481 EAL: Probing VFIO support... 00:16:06.481 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:16:06.481 EAL: VFIO modules not loaded, skipping VFIO support... 00:16:06.481 EAL: Ask a virtual area of 0x2e000 bytes 00:16:06.481 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:16:06.481 EAL: Setting up physically contiguous memory... 
00:16:06.481 EAL: Setting maximum number of open files to 524288 00:16:06.481 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:16:06.481 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:16:06.481 EAL: Ask a virtual area of 0x61000 bytes 00:16:06.481 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:16:06.481 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:06.481 EAL: Ask a virtual area of 0x400000000 bytes 00:16:06.481 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:16:06.482 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:16:06.482 EAL: Ask a virtual area of 0x61000 bytes 00:16:06.482 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:16:06.482 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:06.482 EAL: Ask a virtual area of 0x400000000 bytes 00:16:06.482 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:16:06.482 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:16:06.482 EAL: Ask a virtual area of 0x61000 bytes 00:16:06.482 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:16:06.482 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:06.482 EAL: Ask a virtual area of 0x400000000 bytes 00:16:06.482 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:16:06.482 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:16:06.482 EAL: Ask a virtual area of 0x61000 bytes 00:16:06.482 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:16:06.482 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:06.482 EAL: Ask a virtual area of 0x400000000 bytes 00:16:06.482 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:16:06.482 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:16:06.482 EAL: Hugepages will be freed exactly as allocated. 
00:16:06.482 EAL: No shared files mode enabled, IPC is disabled 00:16:06.482 EAL: No shared files mode enabled, IPC is disabled 00:16:06.482 EAL: TSC frequency is ~2600000 KHz 00:16:06.482 EAL: Main lcore 0 is ready (tid=7f8aa50daa40;cpuset=[0]) 00:16:06.482 EAL: Trying to obtain current memory policy. 00:16:06.482 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:06.482 EAL: Restoring previous memory policy: 0 00:16:06.482 EAL: request: mp_malloc_sync 00:16:06.482 EAL: No shared files mode enabled, IPC is disabled 00:16:06.482 EAL: Heap on socket 0 was expanded by 2MB 00:16:06.482 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:16:06.482 EAL: No PCI address specified using 'addr=' in: bus=pci 00:16:06.482 EAL: Mem event callback 'spdk:(nil)' registered 00:16:06.482 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:16:06.482 00:16:06.482 00:16:06.482 CUnit - A unit testing framework for C - Version 2.1-3 00:16:06.482 http://cunit.sourceforge.net/ 00:16:06.482 00:16:06.482 00:16:06.482 Suite: components_suite 00:16:07.047 Test: vtophys_malloc_test ...passed 00:16:07.047 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:16:07.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:07.047 EAL: Restoring previous memory policy: 4 00:16:07.047 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.047 EAL: request: mp_malloc_sync 00:16:07.047 EAL: No shared files mode enabled, IPC is disabled 00:16:07.047 EAL: Heap on socket 0 was expanded by 4MB 00:16:07.047 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.047 EAL: request: mp_malloc_sync 00:16:07.047 EAL: No shared files mode enabled, IPC is disabled 00:16:07.047 EAL: Heap on socket 0 was shrunk by 4MB 00:16:07.047 EAL: Trying to obtain current memory policy. 
00:16:07.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:07.047 EAL: Restoring previous memory policy: 4 00:16:07.047 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.047 EAL: request: mp_malloc_sync 00:16:07.047 EAL: No shared files mode enabled, IPC is disabled 00:16:07.047 EAL: Heap on socket 0 was expanded by 6MB 00:16:07.047 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.047 EAL: request: mp_malloc_sync 00:16:07.047 EAL: No shared files mode enabled, IPC is disabled 00:16:07.047 EAL: Heap on socket 0 was shrunk by 6MB 00:16:07.047 EAL: Trying to obtain current memory policy. 00:16:07.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:07.047 EAL: Restoring previous memory policy: 4 00:16:07.047 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.047 EAL: request: mp_malloc_sync 00:16:07.047 EAL: No shared files mode enabled, IPC is disabled 00:16:07.048 EAL: Heap on socket 0 was expanded by 10MB 00:16:07.048 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.048 EAL: request: mp_malloc_sync 00:16:07.048 EAL: No shared files mode enabled, IPC is disabled 00:16:07.048 EAL: Heap on socket 0 was shrunk by 10MB 00:16:07.048 EAL: Trying to obtain current memory policy. 00:16:07.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:07.048 EAL: Restoring previous memory policy: 4 00:16:07.048 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.048 EAL: request: mp_malloc_sync 00:16:07.048 EAL: No shared files mode enabled, IPC is disabled 00:16:07.048 EAL: Heap on socket 0 was expanded by 18MB 00:16:07.048 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.048 EAL: request: mp_malloc_sync 00:16:07.048 EAL: No shared files mode enabled, IPC is disabled 00:16:07.048 EAL: Heap on socket 0 was shrunk by 18MB 00:16:07.048 EAL: Trying to obtain current memory policy. 
00:16:07.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:07.048 EAL: Restoring previous memory policy: 4 00:16:07.048 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.048 EAL: request: mp_malloc_sync 00:16:07.048 EAL: No shared files mode enabled, IPC is disabled 00:16:07.048 EAL: Heap on socket 0 was expanded by 34MB 00:16:07.048 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.048 EAL: request: mp_malloc_sync 00:16:07.048 EAL: No shared files mode enabled, IPC is disabled 00:16:07.048 EAL: Heap on socket 0 was shrunk by 34MB 00:16:07.048 EAL: Trying to obtain current memory policy. 00:16:07.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:07.048 EAL: Restoring previous memory policy: 4 00:16:07.048 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.048 EAL: request: mp_malloc_sync 00:16:07.048 EAL: No shared files mode enabled, IPC is disabled 00:16:07.048 EAL: Heap on socket 0 was expanded by 66MB 00:16:07.306 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.306 EAL: request: mp_malloc_sync 00:16:07.306 EAL: No shared files mode enabled, IPC is disabled 00:16:07.306 EAL: Heap on socket 0 was shrunk by 66MB 00:16:07.306 EAL: Trying to obtain current memory policy. 00:16:07.306 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:07.306 EAL: Restoring previous memory policy: 4 00:16:07.306 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.306 EAL: request: mp_malloc_sync 00:16:07.306 EAL: No shared files mode enabled, IPC is disabled 00:16:07.306 EAL: Heap on socket 0 was expanded by 130MB 00:16:07.306 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.565 EAL: request: mp_malloc_sync 00:16:07.565 EAL: No shared files mode enabled, IPC is disabled 00:16:07.565 EAL: Heap on socket 0 was shrunk by 130MB 00:16:07.565 EAL: Trying to obtain current memory policy. 
00:16:07.565 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:07.565 EAL: Restoring previous memory policy: 4 00:16:07.565 EAL: Calling mem event callback 'spdk:(nil)' 00:16:07.565 EAL: request: mp_malloc_sync 00:16:07.565 EAL: No shared files mode enabled, IPC is disabled 00:16:07.565 EAL: Heap on socket 0 was expanded by 258MB 00:16:07.824 EAL: Calling mem event callback 'spdk:(nil)' 00:16:08.082 EAL: request: mp_malloc_sync 00:16:08.082 EAL: No shared files mode enabled, IPC is disabled 00:16:08.082 EAL: Heap on socket 0 was shrunk by 258MB 00:16:08.340 EAL: Trying to obtain current memory policy. 00:16:08.340 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:08.340 EAL: Restoring previous memory policy: 4 00:16:08.340 EAL: Calling mem event callback 'spdk:(nil)' 00:16:08.340 EAL: request: mp_malloc_sync 00:16:08.340 EAL: No shared files mode enabled, IPC is disabled 00:16:08.340 EAL: Heap on socket 0 was expanded by 514MB 00:16:08.906 EAL: Calling mem event callback 'spdk:(nil)' 00:16:08.906 EAL: request: mp_malloc_sync 00:16:08.906 EAL: No shared files mode enabled, IPC is disabled 00:16:08.906 EAL: Heap on socket 0 was shrunk by 514MB 00:16:09.471 EAL: Trying to obtain current memory policy. 
00:16:09.472 EAL: Setting policy MPOL_PREFERRED for socket 0
00:16:09.730 EAL: Restoring previous memory policy: 4
00:16:09.730 EAL: Calling mem event callback 'spdk:(nil)'
00:16:09.730 EAL: request: mp_malloc_sync
00:16:09.730 EAL: No shared files mode enabled, IPC is disabled
00:16:09.730 EAL: Heap on socket 0 was expanded by 1026MB
00:16:11.106 EAL: Calling mem event callback 'spdk:(nil)'
00:16:11.106 EAL: request: mp_malloc_sync
00:16:11.106 EAL: No shared files mode enabled, IPC is disabled
00:16:11.106 EAL: Heap on socket 0 was shrunk by 1026MB
00:16:12.040 passed
00:16:12.040
00:16:12.040 Run Summary: Type Total Ran Passed Failed Inactive
00:16:12.040 suites 1 1 n/a 0 0
00:16:12.040 tests 2 2 2 0 0
00:16:12.040 asserts 5726 5726 5726 0 n/a
00:16:12.040
00:16:12.040 Elapsed time = 5.391 seconds
00:16:12.040 EAL: Calling mem event callback 'spdk:(nil)'
00:16:12.040 EAL: request: mp_malloc_sync
00:16:12.040 EAL: No shared files mode enabled, IPC is disabled
00:16:12.040 EAL: Heap on socket 0 was shrunk by 2MB
00:16:12.040 EAL: No shared files mode enabled, IPC is disabled
00:16:12.040 EAL: No shared files mode enabled, IPC is disabled
00:16:12.040 EAL: No shared files mode enabled, IPC is disabled
00:16:12.040
00:16:12.040 real 0m5.672s
00:16:12.040 user 0m4.715s
00:16:12.040 sys 0m0.796s
00:16:12.040 ************************************
00:16:12.040 END TEST env_vtophys
00:16:12.040 ************************************
00:16:12.040 17:03:48 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:12.040 17:03:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:16:12.040 17:03:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:16:12.040 17:03:48 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:16:12.040 17:03:48 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:12.040 17:03:48 env -- common/autotest_common.sh@10 -- # set +x
00:16:12.040 ************************************
00:16:12.040 START TEST env_pci
00:16:12.040 ************************************
00:16:12.040 17:03:48 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:16:12.040
00:16:12.040
00:16:12.040 CUnit - A unit testing framework for C - Version 2.1-3
00:16:12.040 http://cunit.sourceforge.net/
00:16:12.040
00:16:12.040
00:16:12.040 Suite: pci
00:16:12.040 Test: pci_hook ...[2024-11-08 17:03:48.743576] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56124 has claimed it
00:16:12.298 EAL: Cannot find device (10000:00:01.0)
00:16:12.298 EAL: Failed to attach device on primary process
00:16:12.298 passed
00:16:12.298
00:16:12.298 Run Summary: Type Total Ran Passed Failed Inactive
00:16:12.298 suites 1 1 n/a 0 0
00:16:12.298 tests 1 1 1 0 0
00:16:12.298 asserts 25 25 25 0 n/a
00:16:12.298
00:16:12.298 Elapsed time = 0.004 seconds
00:16:12.298
00:16:12.298 real 0m0.067s
00:16:12.298 user 0m0.033s
00:16:12.298 sys 0m0.032s
00:16:12.298 ************************************
00:16:12.298 END TEST env_pci
00:16:12.298 ************************************
00:16:12.298 17:03:48 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:12.298 17:03:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:16:12.298 17:03:48 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:16:12.299 17:03:48 env -- env/env.sh@15 -- # uname
00:16:12.299 17:03:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:16:12.299 17:03:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:16:12.299 17:03:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:16:12.299 17:03:48 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:16:12.299 17:03:48 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:12.299 17:03:48 env -- common/autotest_common.sh@10 -- # set +x
00:16:12.299 ************************************
00:16:12.299 START TEST env_dpdk_post_init
00:16:12.299 ************************************
00:16:12.299 17:03:48 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:16:12.299 EAL: Detected CPU lcores: 10
00:16:12.299 EAL: Detected NUMA nodes: 1
00:16:12.299 EAL: Detected shared linkage of DPDK
00:16:12.299 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:16:12.299 EAL: Selected IOVA mode 'PA'
00:16:12.558 TELEMETRY: No legacy callbacks, legacy socket not created
00:16:12.558 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:16:12.558 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:16:12.558 Starting DPDK initialization...
00:16:12.558 Starting SPDK post initialization...
00:16:12.558 SPDK NVMe probe
00:16:12.558 Attaching to 0000:00:10.0
00:16:12.558 Attaching to 0000:00:11.0
00:16:12.558 Attached to 0000:00:10.0
00:16:12.558 Attached to 0000:00:11.0
00:16:12.558 Cleaning up...
00:16:12.558 ************************************
00:16:12.558 END TEST env_dpdk_post_init
00:16:12.558 ************************************
00:16:12.558
00:16:12.558 real 0m0.248s
00:16:12.558 user 0m0.078s
00:16:12.558 sys 0m0.071s
00:16:12.558 17:03:49 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:12.558 17:03:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:16:12.558 17:03:49 env -- env/env.sh@26 -- # uname
00:16:12.558 17:03:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:16:12.558 17:03:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:16:12.558 17:03:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:16:12.558 17:03:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:12.558 17:03:49 env -- common/autotest_common.sh@10 -- # set +x
00:16:12.558 ************************************
00:16:12.558 START TEST env_mem_callbacks
00:16:12.558 ************************************
00:16:12.558 17:03:49 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:16:12.558 EAL: Detected CPU lcores: 10
00:16:12.558 EAL: Detected NUMA nodes: 1
00:16:12.558 EAL: Detected shared linkage of DPDK
00:16:12.558 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:16:12.558 EAL: Selected IOVA mode 'PA'
00:16:12.816
00:16:12.816
00:16:12.816 CUnit - A unit testing framework for C - Version 2.1-3
00:16:12.816 http://cunit.sourceforge.net/
00:16:12.816
00:16:12.816
00:16:12.816 Suite: memory
00:16:12.816 Test: test ...
00:16:12.816 register 0x200000200000 2097152
00:16:12.816 malloc 3145728
00:16:12.816 TELEMETRY: No legacy callbacks, legacy socket not created
00:16:12.816 register 0x200000400000 4194304
00:16:12.816 buf 0x2000004fffc0 len 3145728 PASSED
00:16:12.816 malloc 64
00:16:12.816 buf 0x2000004ffec0 len 64 PASSED
00:16:12.816 malloc 4194304
00:16:12.816 register 0x200000800000 6291456
00:16:12.816 buf 0x2000009fffc0 len 4194304 PASSED
00:16:12.816 free 0x2000004fffc0 3145728
00:16:12.816 free 0x2000004ffec0 64
00:16:12.816 unregister 0x200000400000 4194304 PASSED
00:16:12.816 free 0x2000009fffc0 4194304
00:16:12.816 unregister 0x200000800000 6291456 PASSED
00:16:12.816 malloc 8388608
00:16:12.816 register 0x200000400000 10485760
00:16:12.816 buf 0x2000005fffc0 len 8388608 PASSED
00:16:12.816 free 0x2000005fffc0 8388608
00:16:12.816 unregister 0x200000400000 10485760 PASSED
00:16:12.816 passed
00:16:12.816
00:16:12.816 Run Summary: Type Total Ran Passed Failed Inactive
00:16:12.816 suites 1 1 n/a 0 0
00:16:12.816 tests 1 1 1 0 0
00:16:12.816 asserts 15 15 15 0 n/a
00:16:12.816
00:16:12.816 Elapsed time = 0.050 seconds
00:16:12.816
00:16:12.816 real 0m0.221s
00:16:12.816 user 0m0.074s
00:16:12.816 sys 0m0.044s
00:16:12.816 17:03:49 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:12.816 17:03:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:16:12.816 ************************************
00:16:12.816 END TEST env_mem_callbacks
00:16:12.816 ************************************
00:16:12.816 ************************************
00:16:12.816 END TEST env
00:16:12.816 ************************************
00:16:12.816
00:16:12.816 real 0m6.963s
00:16:12.816 user 0m5.313s
00:16:12.816 sys 0m1.177s
00:16:12.816 17:03:49 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:12.816 17:03:49 env -- common/autotest_common.sh@10 -- # set +x
00:16:12.816 17:03:49 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:16:12.816 17:03:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:16:12.816 17:03:49 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:12.816 17:03:49 -- common/autotest_common.sh@10 -- # set +x
00:16:12.816 ************************************
00:16:12.816 START TEST rpc
00:16:12.816 ************************************
00:16:13.074 * Looking for test storage...
00:16:13.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:16:13.074 17:03:49 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:16:13.074 17:03:49 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:16:13.074 17:03:49 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:16:13.074 17:03:49 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:16:13.074 17:03:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:13.074 17:03:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:13.074 17:03:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:13.074 17:03:49 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:16:13.074 17:03:49 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:16:13.074 17:03:49 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:16:13.074 17:03:49 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:16:13.074 17:03:49 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:16:13.074 17:03:49 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:16:13.074 17:03:49 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:16:13.074 17:03:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:13.074 17:03:49 rpc -- scripts/common.sh@344 -- # case "$op" in
00:16:13.074 17:03:49 rpc -- scripts/common.sh@345 -- # : 1
00:16:13.074 17:03:49 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:13.074 17:03:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:13.074 17:03:49 rpc -- scripts/common.sh@365 -- # decimal 1
00:16:13.074 17:03:49 rpc -- scripts/common.sh@353 -- # local d=1
00:16:13.074 17:03:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:13.074 17:03:49 rpc -- scripts/common.sh@355 -- # echo 1
00:16:13.075 17:03:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:16:13.075 17:03:49 rpc -- scripts/common.sh@366 -- # decimal 2
00:16:13.075 17:03:49 rpc -- scripts/common.sh@353 -- # local d=2
00:16:13.075 17:03:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:13.075 17:03:49 rpc -- scripts/common.sh@355 -- # echo 2
00:16:13.075 17:03:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:16:13.075 17:03:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:13.075 17:03:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:13.075 17:03:49 rpc -- scripts/common.sh@368 -- # return 0
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:16:13.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:13.075 --rc genhtml_branch_coverage=1
00:16:13.075 --rc genhtml_function_coverage=1
00:16:13.075 --rc genhtml_legend=1
00:16:13.075 --rc geninfo_all_blocks=1
00:16:13.075 --rc geninfo_unexecuted_blocks=1
00:16:13.075
00:16:13.075 '
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:16:13.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:13.075 --rc genhtml_branch_coverage=1
00:16:13.075 --rc genhtml_function_coverage=1
00:16:13.075 --rc genhtml_legend=1
00:16:13.075 --rc geninfo_all_blocks=1
00:16:13.075 --rc geninfo_unexecuted_blocks=1
00:16:13.075
00:16:13.075 '
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:16:13.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:13.075 --rc genhtml_branch_coverage=1
00:16:13.075 --rc genhtml_function_coverage=1
00:16:13.075 --rc genhtml_legend=1
00:16:13.075 --rc geninfo_all_blocks=1
00:16:13.075 --rc geninfo_unexecuted_blocks=1
00:16:13.075
00:16:13.075 '
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:16:13.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:13.075 --rc genhtml_branch_coverage=1
00:16:13.075 --rc genhtml_function_coverage=1
00:16:13.075 --rc genhtml_legend=1
00:16:13.075 --rc geninfo_all_blocks=1
00:16:13.075 --rc geninfo_unexecuted_blocks=1
00:16:13.075
00:16:13.075 '
00:16:13.075 17:03:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56251
00:16:13.075 17:03:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:16:13.075 17:03:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:16:13.075 17:03:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56251
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@833 -- # '[' -z 56251 ']'
00:16:13.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:16:13.075 17:03:49 rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.075 [2024-11-08 17:03:49.733889] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization...
00:16:13.075 [2024-11-08 17:03:49.734055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56251 ]
00:16:13.333 [2024-11-08 17:03:49.898469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:13.333 [2024-11-08 17:03:49.999379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:16:13.333 [2024-11-08 17:03:49.999444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56251' to capture a snapshot of events at runtime.
00:16:13.333 [2024-11-08 17:03:49.999454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:13.333 [2024-11-08 17:03:49.999464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:13.333 [2024-11-08 17:03:49.999471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56251 for offline analysis/debug.
00:16:13.333 [2024-11-08 17:03:50.000324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:13.897 17:03:50 rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:16:13.897 17:03:50 rpc -- common/autotest_common.sh@866 -- # return 0
00:16:13.898 17:03:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:16:13.898 17:03:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:16:13.898 17:03:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:16:13.898 17:03:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:16:13.898 17:03:50 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:16:13.898 17:03:50 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:13.898 17:03:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:16:13.898 ************************************
00:16:13.898 START TEST rpc_integrity
00:16:13.898 ************************************
00:16:13.898 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity
00:16:14.155 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:14.155 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.155 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:16:14.155 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.155 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:16:14.155 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:16:14.155 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:16:14.155 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:16:14.155 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.155 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:16:14.155 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.155 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:16:14.155 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:16:14.155 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.155 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:16:14.155 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.155 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:16:14.155 {
00:16:14.155 "name": "Malloc0",
00:16:14.155 "aliases": [
00:16:14.155 "e7801285-39b2-4acc-b4f0-e03ba1d39370"
00:16:14.155 ],
00:16:14.155 "product_name": "Malloc disk",
00:16:14.155 "block_size": 512,
00:16:14.155 "num_blocks": 16384,
00:16:14.155 "uuid": "e7801285-39b2-4acc-b4f0-e03ba1d39370",
00:16:14.155 "assigned_rate_limits": {
00:16:14.155 "rw_ios_per_sec": 0,
00:16:14.155 "rw_mbytes_per_sec": 0,
00:16:14.155 "r_mbytes_per_sec": 0,
00:16:14.155 "w_mbytes_per_sec": 0
00:16:14.155 },
00:16:14.155 "claimed": false,
00:16:14.155 "zoned": false,
00:16:14.155 "supported_io_types": {
00:16:14.155 "read": true,
00:16:14.155 "write": true,
00:16:14.155 "unmap": true,
00:16:14.155 "flush": true,
00:16:14.155 "reset": true,
00:16:14.155 "nvme_admin": false,
00:16:14.155 "nvme_io": false,
00:16:14.155 "nvme_io_md": false,
00:16:14.155 "write_zeroes": true,
00:16:14.155 "zcopy": true,
00:16:14.155 "get_zone_info": false,
00:16:14.155 "zone_management": false,
00:16:14.155 "zone_append": false,
00:16:14.155 "compare": false,
00:16:14.155 "compare_and_write": false,
00:16:14.155 "abort": true,
00:16:14.155 "seek_hole": false,
00:16:14.155 "seek_data": false,
00:16:14.155 "copy": true,
00:16:14.155 "nvme_iov_md": false
00:16:14.155 },
00:16:14.155 "memory_domains": [
00:16:14.155 {
00:16:14.155 "dma_device_id": "system",
00:16:14.155 "dma_device_type": 1
00:16:14.155 },
00:16:14.155 {
00:16:14.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:14.155 "dma_device_type": 2
00:16:14.155 }
00:16:14.155 ],
00:16:14.155 "driver_specific": {}
00:16:14.155 }
00:16:14.155 ]'
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:16:14.156 [2024-11-08 17:03:50.723822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:16:14.156 [2024-11-08 17:03:50.723898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:14.156 [2024-11-08 17:03:50.723923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:16:14.156 [2024-11-08 17:03:50.723937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:14.156 [2024-11-08 17:03:50.726194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:14.156 [2024-11-08 17:03:50.726237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:16:14.156 Passthru0
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:16:14.156 {
00:16:14.156 "name": "Malloc0",
00:16:14.156 "aliases": [
00:16:14.156 "e7801285-39b2-4acc-b4f0-e03ba1d39370"
00:16:14.156 ],
00:16:14.156 "product_name": "Malloc disk",
00:16:14.156 "block_size": 512,
00:16:14.156 "num_blocks": 16384,
00:16:14.156 "uuid": "e7801285-39b2-4acc-b4f0-e03ba1d39370",
00:16:14.156 "assigned_rate_limits": {
00:16:14.156 "rw_ios_per_sec": 0,
00:16:14.156 "rw_mbytes_per_sec": 0,
00:16:14.156 "r_mbytes_per_sec": 0,
00:16:14.156 "w_mbytes_per_sec": 0
00:16:14.156 },
00:16:14.156 "claimed": true,
00:16:14.156 "claim_type": "exclusive_write",
00:16:14.156 "zoned": false,
00:16:14.156 "supported_io_types": {
00:16:14.156 "read": true,
00:16:14.156 "write": true,
00:16:14.156 "unmap": true,
00:16:14.156 "flush": true,
00:16:14.156 "reset": true,
00:16:14.156 "nvme_admin": false,
00:16:14.156 "nvme_io": false,
00:16:14.156 "nvme_io_md": false,
00:16:14.156 "write_zeroes": true,
00:16:14.156 "zcopy": true,
00:16:14.156 "get_zone_info": false,
00:16:14.156 "zone_management": false,
00:16:14.156 "zone_append": false,
00:16:14.156 "compare": false,
00:16:14.156 "compare_and_write": false,
00:16:14.156 "abort": true,
00:16:14.156 "seek_hole": false,
00:16:14.156 "seek_data": false,
00:16:14.156 "copy": true,
00:16:14.156 "nvme_iov_md": false
00:16:14.156 },
00:16:14.156 "memory_domains": [
00:16:14.156 {
00:16:14.156 "dma_device_id": "system",
00:16:14.156 "dma_device_type": 1
00:16:14.156 },
00:16:14.156 {
00:16:14.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:14.156 "dma_device_type": 2
00:16:14.156 }
00:16:14.156 ],
00:16:14.156 "driver_specific": {}
00:16:14.156 },
00:16:14.156 {
00:16:14.156 "name": "Passthru0",
00:16:14.156 "aliases": [
00:16:14.156 "27bf88bd-3821-5a83-ac25-9ca0aa528b97"
00:16:14.156 ],
00:16:14.156 "product_name": "passthru",
00:16:14.156 "block_size": 512,
00:16:14.156 "num_blocks": 16384,
00:16:14.156 "uuid": "27bf88bd-3821-5a83-ac25-9ca0aa528b97",
00:16:14.156 "assigned_rate_limits": {
00:16:14.156 "rw_ios_per_sec": 0,
00:16:14.156 "rw_mbytes_per_sec": 0,
00:16:14.156 "r_mbytes_per_sec": 0,
00:16:14.156 "w_mbytes_per_sec": 0
00:16:14.156 },
00:16:14.156 "claimed": false,
00:16:14.156 "zoned": false,
00:16:14.156 "supported_io_types": {
00:16:14.156 "read": true,
00:16:14.156 "write": true,
00:16:14.156 "unmap": true,
00:16:14.156 "flush": true,
00:16:14.156 "reset": true,
00:16:14.156 "nvme_admin": false,
00:16:14.156 "nvme_io": false,
00:16:14.156 "nvme_io_md": false,
00:16:14.156 "write_zeroes": true,
00:16:14.156 "zcopy": true,
00:16:14.156 "get_zone_info": false,
00:16:14.156 "zone_management": false,
00:16:14.156 "zone_append": false,
00:16:14.156 "compare": false,
00:16:14.156 "compare_and_write": false,
00:16:14.156 "abort": true,
00:16:14.156 "seek_hole": false,
00:16:14.156 "seek_data": false,
00:16:14.156 "copy": true,
00:16:14.156 "nvme_iov_md": false
00:16:14.156 },
00:16:14.156 "memory_domains": [
00:16:14.156 {
00:16:14.156 "dma_device_id": "system",
00:16:14.156 "dma_device_type": 1
00:16:14.156 },
00:16:14.156 {
00:16:14.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:14.156 "dma_device_type": 2
00:16:14.156 }
00:16:14.156 ],
00:16:14.156 "driver_specific": {
00:16:14.156 "passthru": {
00:16:14.156 "name": "Passthru0",
00:16:14.156 "base_bdev_name": "Malloc0"
00:16:14.156 }
00:16:14.156 }
00:16:14.156 }
00:16:14.156 ]'
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:16:14.156 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:16:14.156 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:16:14.414 ************************************
00:16:14.414 END TEST rpc_integrity
00:16:14.414 ************************************
00:16:14.414 17:03:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:16:14.414
00:16:14.414 real 0m0.265s
00:16:14.414 user 0m0.139s
00:16:14.414 sys 0m0.037s
00:16:14.414 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:14.414 17:03:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:16:14.414 17:03:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:16:14.414 17:03:50 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:16:14.414 17:03:50 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:14.414 17:03:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:16:14.414 ************************************
00:16:14.414 START TEST rpc_plugins
00:16:14.414 ************************************
00:16:14.414 17:03:50 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins
00:16:14.414 17:03:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:16:14.414 17:03:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.414 17:03:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:16:14.414 17:03:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.414 17:03:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:16:14.414 17:03:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:16:14.414 17:03:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.414 17:03:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:16:14.414 17:03:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.414 17:03:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:16:14.414 {
00:16:14.414 "name": "Malloc1",
00:16:14.414 "aliases": [
00:16:14.414 "70229a2a-b9b8-43ab-8ebd-04549efb3c92"
00:16:14.414 ],
00:16:14.414 "product_name": "Malloc disk",
00:16:14.414 "block_size": 4096,
00:16:14.414 "num_blocks": 256,
00:16:14.414 "uuid": "70229a2a-b9b8-43ab-8ebd-04549efb3c92",
00:16:14.414 "assigned_rate_limits": {
00:16:14.414 "rw_ios_per_sec": 0,
00:16:14.414 "rw_mbytes_per_sec": 0,
00:16:14.414 "r_mbytes_per_sec": 0,
00:16:14.414 "w_mbytes_per_sec": 0
00:16:14.414 },
00:16:14.414 "claimed": false,
00:16:14.414 "zoned": false,
00:16:14.414 "supported_io_types": {
00:16:14.414 "read": true,
00:16:14.414 "write": true,
00:16:14.414 "unmap": true,
00:16:14.414 "flush": true,
00:16:14.414 "reset": true,
00:16:14.414 "nvme_admin": false,
00:16:14.414 "nvme_io": false,
00:16:14.414 "nvme_io_md": false,
00:16:14.414 "write_zeroes": true,
00:16:14.414 "zcopy": true,
00:16:14.414 "get_zone_info": false,
00:16:14.414 "zone_management": false,
00:16:14.414 "zone_append": false,
00:16:14.414 "compare": false,
00:16:14.414 "compare_and_write": false,
00:16:14.414 "abort": true,
00:16:14.414 "seek_hole": false,
00:16:14.414 "seek_data": false,
00:16:14.414 "copy": true,
00:16:14.414 "nvme_iov_md": false
00:16:14.414 },
00:16:14.414 "memory_domains": [
00:16:14.414 {
00:16:14.414 "dma_device_id": "system",
00:16:14.414 "dma_device_type": 1
00:16:14.414 },
00:16:14.414 {
00:16:14.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:14.414 "dma_device_type": 2
00:16:14.414 }
00:16:14.414 ],
00:16:14.414 "driver_specific": {}
00:16:14.414 }
00:16:14.414 ]'
00:16:14.414 17:03:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:16:14.414 17:03:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:16:14.414 17:03:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:16:14.414 17:03:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.414 17:03:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:16:14.414 17:03:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.414 17:03:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:16:14.414 17:03:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.414 17:03:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:16:14.414 17:03:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.414 17:03:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:16:14.414 17:03:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:16:14.414 ************************************
00:16:14.414 END TEST rpc_plugins
00:16:14.414 ************************************
00:16:14.414 17:03:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:16:14.414
00:16:14.414 real 0m0.117s
00:16:14.414 user 0m0.063s
00:16:14.414 sys 0m0.019s
00:16:14.414 17:03:51 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:14.414 17:03:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:16:14.414 17:03:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:16:14.414 17:03:51 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:16:14.414 17:03:51 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:14.414 17:03:51 rpc -- common/autotest_common.sh@10 -- # set +x
00:16:14.414 ************************************
00:16:14.414 START TEST rpc_trace_cmd_test
00:16:14.414 ************************************
00:16:14.414 17:03:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test
00:16:14.414 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:16:14.414 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:16:14.414 17:03:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.414 17:03:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.671 17:03:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.671 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:16:14.671 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56251",
00:16:14.671 "tpoint_group_mask": "0x8",
00:16:14.671 "iscsi_conn": {
00:16:14.671 "mask": "0x2",
00:16:14.671 "tpoint_mask": "0x0"
00:16:14.671 },
00:16:14.672 "scsi": {
00:16:14.672 "mask": "0x4",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "bdev": {
00:16:14.672 "mask": "0x8",
00:16:14.672 "tpoint_mask": "0xffffffffffffffff"
00:16:14.672 },
00:16:14.672 "nvmf_rdma": {
00:16:14.672 "mask": "0x10",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "nvmf_tcp": {
00:16:14.672 "mask": "0x20",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "ftl": {
00:16:14.672 "mask": "0x40",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "blobfs": {
00:16:14.672 "mask": "0x80",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "dsa": {
00:16:14.672 "mask": "0x200",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "thread": {
00:16:14.672 "mask": "0x400",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "nvme_pcie": {
00:16:14.672 "mask": "0x800",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "iaa": {
00:16:14.672 "mask": "0x1000",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "nvme_tcp": {
00:16:14.672 "mask": "0x2000",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "bdev_nvme": {
00:16:14.672 "mask": "0x4000",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "sock": {
00:16:14.672 "mask": "0x8000",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "blob": {
00:16:14.672 "mask": "0x10000",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "bdev_raid": {
00:16:14.672 "mask": "0x20000",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 },
00:16:14.672 "scheduler": {
00:16:14.672 "mask": "0x40000",
00:16:14.672 "tpoint_mask": "0x0"
00:16:14.672 }
00:16:14.672 }'
00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:16:14.672 ************************************
00:16:14.672 END TEST rpc_trace_cmd_test
00:16:14.672 ************************************
00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:16:14.672
00:16:14.672 real 0m0.199s
00:16:14.672 user
0m0.157s 00:16:14.672 sys 0m0.028s 00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.672 17:03:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.672 17:03:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:16:14.672 17:03:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:16:14.672 17:03:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:16:14.672 17:03:51 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:14.672 17:03:51 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:14.672 17:03:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.672 ************************************ 00:16:14.672 START TEST rpc_daemon_integrity 00:16:14.672 ************************************ 00:16:14.672 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:16:14.672 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:14.672 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.672 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:14.672 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.672 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:16:14.672 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:16:14.929 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:16:14.929 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:16:14.929 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.929 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:14.929 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.929 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:16:14.929 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:16:14.929 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.929 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:14.929 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:16:14.930 { 00:16:14.930 "name": "Malloc2", 00:16:14.930 "aliases": [ 00:16:14.930 "1d5871e7-75b4-4eb1-b6d8-420373de3ab3" 00:16:14.930 ], 00:16:14.930 "product_name": "Malloc disk", 00:16:14.930 "block_size": 512, 00:16:14.930 "num_blocks": 16384, 00:16:14.930 "uuid": "1d5871e7-75b4-4eb1-b6d8-420373de3ab3", 00:16:14.930 "assigned_rate_limits": { 00:16:14.930 "rw_ios_per_sec": 0, 00:16:14.930 "rw_mbytes_per_sec": 0, 00:16:14.930 "r_mbytes_per_sec": 0, 00:16:14.930 "w_mbytes_per_sec": 0 00:16:14.930 }, 00:16:14.930 "claimed": false, 00:16:14.930 "zoned": false, 00:16:14.930 "supported_io_types": { 00:16:14.930 "read": true, 00:16:14.930 "write": true, 00:16:14.930 "unmap": true, 00:16:14.930 "flush": true, 00:16:14.930 "reset": true, 00:16:14.930 "nvme_admin": false, 00:16:14.930 "nvme_io": false, 00:16:14.930 "nvme_io_md": false, 00:16:14.930 "write_zeroes": true, 00:16:14.930 "zcopy": true, 00:16:14.930 "get_zone_info": false, 00:16:14.930 "zone_management": false, 00:16:14.930 "zone_append": false, 00:16:14.930 "compare": false, 00:16:14.930 "compare_and_write": false, 00:16:14.930 "abort": true, 00:16:14.930 "seek_hole": false, 00:16:14.930 "seek_data": false, 00:16:14.930 "copy": true, 00:16:14.930 "nvme_iov_md": false 00:16:14.930 }, 00:16:14.930 "memory_domains": [ 00:16:14.930 { 00:16:14.930 "dma_device_id": "system", 00:16:14.930 "dma_device_type": 1 00:16:14.930 }, 00:16:14.930 { 00:16:14.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.930 "dma_device_type": 2 00:16:14.930 } 
00:16:14.930 ], 00:16:14.930 "driver_specific": {} 00:16:14.930 } 00:16:14.930 ]' 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:14.930 [2024-11-08 17:03:51.482607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:16:14.930 [2024-11-08 17:03:51.482686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.930 [2024-11-08 17:03:51.482711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:14.930 [2024-11-08 17:03:51.482723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.930 [2024-11-08 17:03:51.485132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.930 [2024-11-08 17:03:51.485176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:16:14.930 Passthru0 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:16:14.930 { 00:16:14.930 "name": "Malloc2", 00:16:14.930 "aliases": [ 00:16:14.930 "1d5871e7-75b4-4eb1-b6d8-420373de3ab3" 
00:16:14.930 ], 00:16:14.930 "product_name": "Malloc disk", 00:16:14.930 "block_size": 512, 00:16:14.930 "num_blocks": 16384, 00:16:14.930 "uuid": "1d5871e7-75b4-4eb1-b6d8-420373de3ab3", 00:16:14.930 "assigned_rate_limits": { 00:16:14.930 "rw_ios_per_sec": 0, 00:16:14.930 "rw_mbytes_per_sec": 0, 00:16:14.930 "r_mbytes_per_sec": 0, 00:16:14.930 "w_mbytes_per_sec": 0 00:16:14.930 }, 00:16:14.930 "claimed": true, 00:16:14.930 "claim_type": "exclusive_write", 00:16:14.930 "zoned": false, 00:16:14.930 "supported_io_types": { 00:16:14.930 "read": true, 00:16:14.930 "write": true, 00:16:14.930 "unmap": true, 00:16:14.930 "flush": true, 00:16:14.930 "reset": true, 00:16:14.930 "nvme_admin": false, 00:16:14.930 "nvme_io": false, 00:16:14.930 "nvme_io_md": false, 00:16:14.930 "write_zeroes": true, 00:16:14.930 "zcopy": true, 00:16:14.930 "get_zone_info": false, 00:16:14.930 "zone_management": false, 00:16:14.930 "zone_append": false, 00:16:14.930 "compare": false, 00:16:14.930 "compare_and_write": false, 00:16:14.930 "abort": true, 00:16:14.930 "seek_hole": false, 00:16:14.930 "seek_data": false, 00:16:14.930 "copy": true, 00:16:14.930 "nvme_iov_md": false 00:16:14.930 }, 00:16:14.930 "memory_domains": [ 00:16:14.930 { 00:16:14.930 "dma_device_id": "system", 00:16:14.930 "dma_device_type": 1 00:16:14.930 }, 00:16:14.930 { 00:16:14.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.930 "dma_device_type": 2 00:16:14.930 } 00:16:14.930 ], 00:16:14.930 "driver_specific": {} 00:16:14.930 }, 00:16:14.930 { 00:16:14.930 "name": "Passthru0", 00:16:14.930 "aliases": [ 00:16:14.930 "2e3ee4ab-0a9a-5339-a073-e54147ef4fa9" 00:16:14.930 ], 00:16:14.930 "product_name": "passthru", 00:16:14.930 "block_size": 512, 00:16:14.930 "num_blocks": 16384, 00:16:14.930 "uuid": "2e3ee4ab-0a9a-5339-a073-e54147ef4fa9", 00:16:14.930 "assigned_rate_limits": { 00:16:14.930 "rw_ios_per_sec": 0, 00:16:14.930 "rw_mbytes_per_sec": 0, 00:16:14.930 "r_mbytes_per_sec": 0, 00:16:14.930 "w_mbytes_per_sec": 0 
00:16:14.930 }, 00:16:14.930 "claimed": false, 00:16:14.930 "zoned": false, 00:16:14.930 "supported_io_types": { 00:16:14.930 "read": true, 00:16:14.930 "write": true, 00:16:14.930 "unmap": true, 00:16:14.930 "flush": true, 00:16:14.930 "reset": true, 00:16:14.930 "nvme_admin": false, 00:16:14.930 "nvme_io": false, 00:16:14.930 "nvme_io_md": false, 00:16:14.930 "write_zeroes": true, 00:16:14.930 "zcopy": true, 00:16:14.930 "get_zone_info": false, 00:16:14.930 "zone_management": false, 00:16:14.930 "zone_append": false, 00:16:14.930 "compare": false, 00:16:14.930 "compare_and_write": false, 00:16:14.930 "abort": true, 00:16:14.930 "seek_hole": false, 00:16:14.930 "seek_data": false, 00:16:14.930 "copy": true, 00:16:14.930 "nvme_iov_md": false 00:16:14.930 }, 00:16:14.930 "memory_domains": [ 00:16:14.930 { 00:16:14.930 "dma_device_id": "system", 00:16:14.930 "dma_device_type": 1 00:16:14.930 }, 00:16:14.930 { 00:16:14.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.930 "dma_device_type": 2 00:16:14.930 } 00:16:14.930 ], 00:16:14.930 "driver_specific": { 00:16:14.930 "passthru": { 00:16:14.930 "name": "Passthru0", 00:16:14.930 "base_bdev_name": "Malloc2" 00:16:14.930 } 00:16:14.930 } 00:16:14.930 } 00:16:14.930 ]' 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:16:14.930 ************************************ 00:16:14.930 END TEST rpc_daemon_integrity 00:16:14.930 ************************************ 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:16:14.930 00:16:14.930 real 0m0.244s 00:16:14.930 user 0m0.124s 00:16:14.930 sys 0m0.038s 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.930 17:03:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:15.188 17:03:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:15.188 17:03:51 rpc -- rpc/rpc.sh@84 -- # killprocess 56251 00:16:15.188 17:03:51 rpc -- common/autotest_common.sh@952 -- # '[' -z 56251 ']' 00:16:15.188 17:03:51 rpc -- common/autotest_common.sh@956 -- # kill -0 56251 00:16:15.188 17:03:51 rpc -- common/autotest_common.sh@957 -- # uname 00:16:15.188 17:03:51 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:15.188 17:03:51 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56251 00:16:15.188 killing process with pid 56251 00:16:15.188 17:03:51 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:15.188 17:03:51 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:16:15.188 17:03:51 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56251' 00:16:15.188 17:03:51 rpc -- common/autotest_common.sh@971 -- # kill 56251 00:16:15.188 17:03:51 rpc -- common/autotest_common.sh@976 -- # wait 56251 00:16:16.569 ************************************ 00:16:16.569 END TEST rpc 00:16:16.569 ************************************ 00:16:16.569 00:16:16.569 real 0m3.787s 00:16:16.569 user 0m4.189s 00:16:16.569 sys 0m0.666s 00:16:16.569 17:03:53 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:16.569 17:03:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.830 17:03:53 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:16:16.830 17:03:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:16.830 17:03:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:16.830 17:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:16.830 ************************************ 00:16:16.830 START TEST skip_rpc 00:16:16.830 ************************************ 00:16:16.830 17:03:53 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:16:16.830 * Looking for test storage... 
00:16:16.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:16:16.830 17:03:53 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:16.830 17:03:53 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:16.830 17:03:53 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:16.830 17:03:53 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:16.830 17:03:53 skip_rpc -- scripts/common.sh@345 -- # : 1 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.831 17:03:53 skip_rpc -- scripts/common.sh@368 -- # return 0 00:16:16.831 17:03:53 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.831 17:03:53 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.831 --rc genhtml_branch_coverage=1 00:16:16.831 --rc genhtml_function_coverage=1 00:16:16.831 --rc genhtml_legend=1 00:16:16.831 --rc geninfo_all_blocks=1 00:16:16.831 --rc geninfo_unexecuted_blocks=1 00:16:16.831 00:16:16.831 ' 00:16:16.831 17:03:53 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.831 --rc genhtml_branch_coverage=1 00:16:16.831 --rc genhtml_function_coverage=1 00:16:16.831 --rc genhtml_legend=1 00:16:16.831 --rc geninfo_all_blocks=1 00:16:16.831 --rc geninfo_unexecuted_blocks=1 00:16:16.831 00:16:16.831 ' 00:16:16.831 17:03:53 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:16:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.831 --rc genhtml_branch_coverage=1 00:16:16.831 --rc genhtml_function_coverage=1 00:16:16.831 --rc genhtml_legend=1 00:16:16.831 --rc geninfo_all_blocks=1 00:16:16.831 --rc geninfo_unexecuted_blocks=1 00:16:16.831 00:16:16.831 ' 00:16:16.831 17:03:53 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:16.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.831 --rc genhtml_branch_coverage=1 00:16:16.831 --rc genhtml_function_coverage=1 00:16:16.831 --rc genhtml_legend=1 00:16:16.831 --rc geninfo_all_blocks=1 00:16:16.831 --rc geninfo_unexecuted_blocks=1 00:16:16.831 00:16:16.831 ' 00:16:16.831 17:03:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:16.831 17:03:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:16:16.831 17:03:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:16:16.831 17:03:53 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:16.831 17:03:53 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:16.831 17:03:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.831 ************************************ 00:16:16.831 START TEST skip_rpc 00:16:16.831 ************************************ 00:16:16.831 17:03:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:16:16.831 17:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56464 00:16:16.831 17:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:16.831 17:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:16:16.831 17:03:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:16:17.103 [2024-11-08 17:03:53.591455] Starting SPDK v25.01-pre 
git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:16:17.103 [2024-11-08 17:03:53.591749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56464 ] 00:16:17.103 [2024-11-08 17:03:53.754658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.360 [2024-11-08 17:03:53.874976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56464 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56464 ']' 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56464 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56464 00:16:22.621 killing process with pid 56464 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56464' 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56464 00:16:22.621 17:03:58 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56464 00:16:23.552 00:16:23.552 real 0m6.614s 00:16:23.552 user 0m6.173s 00:16:23.552 sys 0m0.332s 00:16:23.552 ************************************ 00:16:23.552 END TEST skip_rpc 00:16:23.552 ************************************ 00:16:23.552 17:04:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:23.552 17:04:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.552 17:04:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:16:23.552 17:04:00 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:23.552 17:04:00 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:23.552 17:04:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.552 
************************************ 00:16:23.552 START TEST skip_rpc_with_json 00:16:23.553 ************************************ 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56562 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56562 00:16:23.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 56562 ']' 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:23.553 17:04:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:23.810 [2024-11-08 17:04:00.280389] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:16:23.810 [2024-11-08 17:04:00.280524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56562 ] 00:16:23.810 [2024-11-08 17:04:00.441117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.069 [2024-11-08 17:04:00.559901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:24.635 [2024-11-08 17:04:01.220178] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:16:24.635 request: 00:16:24.635 { 00:16:24.635 "trtype": "tcp", 00:16:24.635 "method": "nvmf_get_transports", 00:16:24.635 "req_id": 1 00:16:24.635 } 00:16:24.635 Got JSON-RPC error response 00:16:24.635 response: 00:16:24.635 { 00:16:24.635 "code": -19, 00:16:24.635 "message": "No such device" 00:16:24.635 } 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:24.635 [2024-11-08 17:04:01.232301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.635 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:24.894 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.894 17:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:24.894 { 00:16:24.894 "subsystems": [ 00:16:24.894 { 00:16:24.894 "subsystem": "fsdev", 00:16:24.894 "config": [ 00:16:24.894 { 00:16:24.894 "method": "fsdev_set_opts", 00:16:24.894 "params": { 00:16:24.894 "fsdev_io_pool_size": 65535, 00:16:24.894 "fsdev_io_cache_size": 256 00:16:24.894 } 00:16:24.894 } 00:16:24.894 ] 00:16:24.894 }, 00:16:24.894 { 00:16:24.894 "subsystem": "keyring", 00:16:24.894 "config": [] 00:16:24.894 }, 00:16:24.894 { 00:16:24.894 "subsystem": "iobuf", 00:16:24.894 "config": [ 00:16:24.894 { 00:16:24.894 "method": "iobuf_set_options", 00:16:24.894 "params": { 00:16:24.894 "small_pool_count": 8192, 00:16:24.894 "large_pool_count": 1024, 00:16:24.894 "small_bufsize": 8192, 00:16:24.894 "large_bufsize": 135168, 00:16:24.894 "enable_numa": false 00:16:24.894 } 00:16:24.894 } 00:16:24.894 ] 00:16:24.894 }, 00:16:24.894 { 00:16:24.894 "subsystem": "sock", 00:16:24.894 "config": [ 00:16:24.894 { 00:16:24.894 "method": "sock_set_default_impl", 00:16:24.894 "params": { 00:16:24.894 "impl_name": "posix" 00:16:24.894 } 00:16:24.894 }, 00:16:24.894 { 00:16:24.894 "method": "sock_impl_set_options", 00:16:24.894 "params": { 00:16:24.894 "impl_name": "ssl", 00:16:24.894 "recv_buf_size": 4096, 00:16:24.894 "send_buf_size": 4096, 00:16:24.894 "enable_recv_pipe": true, 00:16:24.894 "enable_quickack": false, 00:16:24.894 
"enable_placement_id": 0, 00:16:24.894 "enable_zerocopy_send_server": true, 00:16:24.894 "enable_zerocopy_send_client": false, 00:16:24.894 "zerocopy_threshold": 0, 00:16:24.894 "tls_version": 0, 00:16:24.894 "enable_ktls": false 00:16:24.894 } 00:16:24.894 }, 00:16:24.894 { 00:16:24.894 "method": "sock_impl_set_options", 00:16:24.894 "params": { 00:16:24.894 "impl_name": "posix", 00:16:24.894 "recv_buf_size": 2097152, 00:16:24.894 "send_buf_size": 2097152, 00:16:24.894 "enable_recv_pipe": true, 00:16:24.894 "enable_quickack": false, 00:16:24.894 "enable_placement_id": 0, 00:16:24.894 "enable_zerocopy_send_server": true, 00:16:24.894 "enable_zerocopy_send_client": false, 00:16:24.894 "zerocopy_threshold": 0, 00:16:24.894 "tls_version": 0, 00:16:24.894 "enable_ktls": false 00:16:24.894 } 00:16:24.894 } 00:16:24.894 ] 00:16:24.894 }, 00:16:24.894 { 00:16:24.894 "subsystem": "vmd", 00:16:24.894 "config": [] 00:16:24.894 }, 00:16:24.894 { 00:16:24.894 "subsystem": "accel", 00:16:24.894 "config": [ 00:16:24.894 { 00:16:24.894 "method": "accel_set_options", 00:16:24.894 "params": { 00:16:24.894 "small_cache_size": 128, 00:16:24.894 "large_cache_size": 16, 00:16:24.894 "task_count": 2048, 00:16:24.894 "sequence_count": 2048, 00:16:24.894 "buf_count": 2048 00:16:24.894 } 00:16:24.894 } 00:16:24.894 ] 00:16:24.894 }, 00:16:24.894 { 00:16:24.894 "subsystem": "bdev", 00:16:24.894 "config": [ 00:16:24.894 { 00:16:24.894 "method": "bdev_set_options", 00:16:24.894 "params": { 00:16:24.894 "bdev_io_pool_size": 65535, 00:16:24.894 "bdev_io_cache_size": 256, 00:16:24.894 "bdev_auto_examine": true, 00:16:24.894 "iobuf_small_cache_size": 128, 00:16:24.894 "iobuf_large_cache_size": 16 00:16:24.894 } 00:16:24.894 }, 00:16:24.894 { 00:16:24.894 "method": "bdev_raid_set_options", 00:16:24.894 "params": { 00:16:24.894 "process_window_size_kb": 1024, 00:16:24.894 "process_max_bandwidth_mb_sec": 0 00:16:24.894 } 00:16:24.894 }, 00:16:24.894 { 00:16:24.894 "method": "bdev_iscsi_set_options", 
00:16:24.895 "params": { 00:16:24.895 "timeout_sec": 30 00:16:24.895 } 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "method": "bdev_nvme_set_options", 00:16:24.895 "params": { 00:16:24.895 "action_on_timeout": "none", 00:16:24.895 "timeout_us": 0, 00:16:24.895 "timeout_admin_us": 0, 00:16:24.895 "keep_alive_timeout_ms": 10000, 00:16:24.895 "arbitration_burst": 0, 00:16:24.895 "low_priority_weight": 0, 00:16:24.895 "medium_priority_weight": 0, 00:16:24.895 "high_priority_weight": 0, 00:16:24.895 "nvme_adminq_poll_period_us": 10000, 00:16:24.895 "nvme_ioq_poll_period_us": 0, 00:16:24.895 "io_queue_requests": 0, 00:16:24.895 "delay_cmd_submit": true, 00:16:24.895 "transport_retry_count": 4, 00:16:24.895 "bdev_retry_count": 3, 00:16:24.895 "transport_ack_timeout": 0, 00:16:24.895 "ctrlr_loss_timeout_sec": 0, 00:16:24.895 "reconnect_delay_sec": 0, 00:16:24.895 "fast_io_fail_timeout_sec": 0, 00:16:24.895 "disable_auto_failback": false, 00:16:24.895 "generate_uuids": false, 00:16:24.895 "transport_tos": 0, 00:16:24.895 "nvme_error_stat": false, 00:16:24.895 "rdma_srq_size": 0, 00:16:24.895 "io_path_stat": false, 00:16:24.895 "allow_accel_sequence": false, 00:16:24.895 "rdma_max_cq_size": 0, 00:16:24.895 "rdma_cm_event_timeout_ms": 0, 00:16:24.895 "dhchap_digests": [ 00:16:24.895 "sha256", 00:16:24.895 "sha384", 00:16:24.895 "sha512" 00:16:24.895 ], 00:16:24.895 "dhchap_dhgroups": [ 00:16:24.895 "null", 00:16:24.895 "ffdhe2048", 00:16:24.895 "ffdhe3072", 00:16:24.895 "ffdhe4096", 00:16:24.895 "ffdhe6144", 00:16:24.895 "ffdhe8192" 00:16:24.895 ] 00:16:24.895 } 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "method": "bdev_nvme_set_hotplug", 00:16:24.895 "params": { 00:16:24.895 "period_us": 100000, 00:16:24.895 "enable": false 00:16:24.895 } 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "method": "bdev_wait_for_examine" 00:16:24.895 } 00:16:24.895 ] 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "subsystem": "scsi", 00:16:24.895 "config": null 00:16:24.895 }, 00:16:24.895 { 
00:16:24.895 "subsystem": "scheduler", 00:16:24.895 "config": [ 00:16:24.895 { 00:16:24.895 "method": "framework_set_scheduler", 00:16:24.895 "params": { 00:16:24.895 "name": "static" 00:16:24.895 } 00:16:24.895 } 00:16:24.895 ] 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "subsystem": "vhost_scsi", 00:16:24.895 "config": [] 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "subsystem": "vhost_blk", 00:16:24.895 "config": [] 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "subsystem": "ublk", 00:16:24.895 "config": [] 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "subsystem": "nbd", 00:16:24.895 "config": [] 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "subsystem": "nvmf", 00:16:24.895 "config": [ 00:16:24.895 { 00:16:24.895 "method": "nvmf_set_config", 00:16:24.895 "params": { 00:16:24.895 "discovery_filter": "match_any", 00:16:24.895 "admin_cmd_passthru": { 00:16:24.895 "identify_ctrlr": false 00:16:24.895 }, 00:16:24.895 "dhchap_digests": [ 00:16:24.895 "sha256", 00:16:24.895 "sha384", 00:16:24.895 "sha512" 00:16:24.895 ], 00:16:24.895 "dhchap_dhgroups": [ 00:16:24.895 "null", 00:16:24.895 "ffdhe2048", 00:16:24.895 "ffdhe3072", 00:16:24.895 "ffdhe4096", 00:16:24.895 "ffdhe6144", 00:16:24.895 "ffdhe8192" 00:16:24.895 ] 00:16:24.895 } 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "method": "nvmf_set_max_subsystems", 00:16:24.895 "params": { 00:16:24.895 "max_subsystems": 1024 00:16:24.895 } 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "method": "nvmf_set_crdt", 00:16:24.895 "params": { 00:16:24.895 "crdt1": 0, 00:16:24.895 "crdt2": 0, 00:16:24.895 "crdt3": 0 00:16:24.895 } 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "method": "nvmf_create_transport", 00:16:24.895 "params": { 00:16:24.895 "trtype": "TCP", 00:16:24.895 "max_queue_depth": 128, 00:16:24.895 "max_io_qpairs_per_ctrlr": 127, 00:16:24.895 "in_capsule_data_size": 4096, 00:16:24.895 "max_io_size": 131072, 00:16:24.895 "io_unit_size": 131072, 00:16:24.895 "max_aq_depth": 128, 00:16:24.895 "num_shared_buffers": 511, 
00:16:24.895 "buf_cache_size": 4294967295, 00:16:24.895 "dif_insert_or_strip": false, 00:16:24.895 "zcopy": false, 00:16:24.895 "c2h_success": true, 00:16:24.895 "sock_priority": 0, 00:16:24.895 "abort_timeout_sec": 1, 00:16:24.895 "ack_timeout": 0, 00:16:24.895 "data_wr_pool_size": 0 00:16:24.895 } 00:16:24.895 } 00:16:24.895 ] 00:16:24.895 }, 00:16:24.895 { 00:16:24.895 "subsystem": "iscsi", 00:16:24.895 "config": [ 00:16:24.895 { 00:16:24.895 "method": "iscsi_set_options", 00:16:24.895 "params": { 00:16:24.895 "node_base": "iqn.2016-06.io.spdk", 00:16:24.895 "max_sessions": 128, 00:16:24.895 "max_connections_per_session": 2, 00:16:24.895 "max_queue_depth": 64, 00:16:24.895 "default_time2wait": 2, 00:16:24.895 "default_time2retain": 20, 00:16:24.895 "first_burst_length": 8192, 00:16:24.895 "immediate_data": true, 00:16:24.895 "allow_duplicated_isid": false, 00:16:24.895 "error_recovery_level": 0, 00:16:24.895 "nop_timeout": 60, 00:16:24.895 "nop_in_interval": 30, 00:16:24.895 "disable_chap": false, 00:16:24.895 "require_chap": false, 00:16:24.895 "mutual_chap": false, 00:16:24.895 "chap_group": 0, 00:16:24.895 "max_large_datain_per_connection": 64, 00:16:24.895 "max_r2t_per_connection": 4, 00:16:24.895 "pdu_pool_size": 36864, 00:16:24.895 "immediate_data_pool_size": 16384, 00:16:24.895 "data_out_pool_size": 2048 00:16:24.895 } 00:16:24.895 } 00:16:24.895 ] 00:16:24.895 } 00:16:24.895 ] 00:16:24.895 } 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56562 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56562 ']' 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56562 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56562 00:16:24.895 killing process with pid 56562 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56562' 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56562 00:16:24.895 17:04:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56562 00:16:26.800 17:04:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56607 00:16:26.800 17:04:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:16:26.800 17:04:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:32.070 17:04:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56607 00:16:32.071 17:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56607 ']' 00:16:32.071 17:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56607 00:16:32.071 17:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:16:32.071 17:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:32.071 17:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56607 00:16:32.071 killing process with pid 56607 00:16:32.071 17:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:32.071 17:04:08 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:32.071 17:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56607' 00:16:32.071 17:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56607 00:16:32.071 17:04:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56607 00:16:33.031 17:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:16:33.031 17:04:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:16:33.031 ************************************ 00:16:33.031 END TEST skip_rpc_with_json 00:16:33.031 ************************************ 00:16:33.031 00:16:33.031 real 0m9.532s 00:16:33.031 user 0m9.001s 00:16:33.031 sys 0m0.765s 00:16:33.031 17:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:33.031 17:04:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:33.291 17:04:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:16:33.291 17:04:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:33.291 17:04:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:33.291 17:04:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.291 ************************************ 00:16:33.291 START TEST skip_rpc_with_delay 00:16:33.291 ************************************ 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:16:33.291 
17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:33.291 [2024-11-08 17:04:09.877382] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:33.291 00:16:33.291 real 0m0.147s 00:16:33.291 user 0m0.072s 00:16:33.291 sys 0m0.074s 00:16:33.291 ************************************ 00:16:33.291 END TEST skip_rpc_with_delay 00:16:33.291 ************************************ 00:16:33.291 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:33.292 17:04:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:16:33.292 17:04:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:16:33.292 17:04:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:16:33.292 17:04:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:16:33.292 17:04:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:33.292 17:04:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:33.292 17:04:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.292 ************************************ 00:16:33.292 START TEST exit_on_failed_rpc_init 00:16:33.292 ************************************ 00:16:33.292 17:04:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:16:33.292 17:04:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56730 00:16:33.292 17:04:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56730 00:16:33.292 17:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 56730 ']' 00:16:33.292 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:16:33.292 17:04:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:33.292 17:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.292 17:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:33.292 17:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.292 17:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:33.292 17:04:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:16:33.550 [2024-11-08 17:04:10.081039] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:16:33.550 [2024-11-08 17:04:10.081177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56730 ] 00:16:33.550 [2024-11-08 17:04:10.243315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.808 [2024-11-08 17:04:10.363407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@650 -- # local es=0 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:16:34.398 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:16:34.398 [2024-11-08 17:04:11.104146] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:16:34.398 [2024-11-08 17:04:11.104277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56748 ] 00:16:34.656 [2024-11-08 17:04:11.259927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.656 [2024-11-08 17:04:11.360672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.656 [2024-11-08 17:04:11.360775] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:34.656 [2024-11-08 17:04:11.360789] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:34.656 [2024-11-08 17:04:11.360800] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56730 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 56730 ']' 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 56730 00:16:34.914 17:04:11 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56730 00:16:34.914 killing process with pid 56730 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56730' 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 56730 00:16:34.914 17:04:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 56730 00:16:36.823 ************************************ 00:16:36.823 END TEST exit_on_failed_rpc_init 00:16:36.823 ************************************ 00:16:36.823 00:16:36.823 real 0m3.206s 00:16:36.823 user 0m3.460s 00:16:36.823 sys 0m0.472s 00:16:36.823 17:04:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:36.823 17:04:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:16:36.823 17:04:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:36.823 ************************************ 00:16:36.823 END TEST skip_rpc 00:16:36.823 ************************************ 00:16:36.823 00:16:36.823 real 0m19.927s 00:16:36.823 user 0m18.867s 00:16:36.823 sys 0m1.830s 00:16:36.823 17:04:13 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:36.823 17:04:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.823 17:04:13 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:16:36.823 17:04:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:36.823 17:04:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:36.823 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:36.823 ************************************ 00:16:36.823 START TEST rpc_client 00:16:36.823 ************************************ 00:16:36.823 17:04:13 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:16:36.823 * Looking for test storage... 00:16:36.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:16:36.823 17:04:13 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:36.823 17:04:13 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:16:36.823 17:04:13 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:36.823 17:04:13 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@345 
-- # : 1 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:36.823 17:04:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:16:36.823 17:04:13 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.824 17:04:13 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:36.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.824 --rc genhtml_branch_coverage=1 00:16:36.824 --rc genhtml_function_coverage=1 00:16:36.824 --rc genhtml_legend=1 00:16:36.824 --rc geninfo_all_blocks=1 00:16:36.824 --rc geninfo_unexecuted_blocks=1 00:16:36.824 00:16:36.824 ' 00:16:36.824 17:04:13 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:36.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.824 --rc genhtml_branch_coverage=1 00:16:36.824 --rc genhtml_function_coverage=1 00:16:36.824 --rc 
genhtml_legend=1 00:16:36.824 --rc geninfo_all_blocks=1 00:16:36.824 --rc geninfo_unexecuted_blocks=1 00:16:36.824 00:16:36.824 ' 00:16:36.824 17:04:13 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:36.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.824 --rc genhtml_branch_coverage=1 00:16:36.824 --rc genhtml_function_coverage=1 00:16:36.824 --rc genhtml_legend=1 00:16:36.824 --rc geninfo_all_blocks=1 00:16:36.824 --rc geninfo_unexecuted_blocks=1 00:16:36.824 00:16:36.824 ' 00:16:36.824 17:04:13 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:36.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.824 --rc genhtml_branch_coverage=1 00:16:36.824 --rc genhtml_function_coverage=1 00:16:36.824 --rc genhtml_legend=1 00:16:36.824 --rc geninfo_all_blocks=1 00:16:36.824 --rc geninfo_unexecuted_blocks=1 00:16:36.824 00:16:36.824 ' 00:16:36.824 17:04:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:16:36.824 OK 00:16:36.824 17:04:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:16:36.824 00:16:36.824 real 0m0.201s 00:16:36.824 user 0m0.115s 00:16:36.824 sys 0m0.088s 00:16:36.824 ************************************ 00:16:36.824 17:04:13 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:36.824 17:04:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:16:36.824 END TEST rpc_client 00:16:36.824 ************************************ 00:16:37.083 17:04:13 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:16:37.083 17:04:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:37.083 17:04:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:37.083 17:04:13 -- common/autotest_common.sh@10 -- # set +x 00:16:37.083 ************************************ 00:16:37.083 START TEST json_config 
00:16:37.083 ************************************ 00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:37.083 17:04:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.083 17:04:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.083 17:04:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.083 17:04:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.083 17:04:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.083 17:04:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.083 17:04:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.083 17:04:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.083 17:04:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.083 17:04:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.083 17:04:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.083 17:04:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:16:37.083 17:04:13 json_config -- scripts/common.sh@345 -- # : 1 00:16:37.083 17:04:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.083 17:04:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:16:37.083 17:04:13 json_config -- scripts/common.sh@365 -- # decimal 1
00:16:37.083 17:04:13 json_config -- scripts/common.sh@353 -- # local d=1
00:16:37.083 17:04:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:37.083 17:04:13 json_config -- scripts/common.sh@355 -- # echo 1
00:16:37.083 17:04:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:16:37.083 17:04:13 json_config -- scripts/common.sh@366 -- # decimal 2
00:16:37.083 17:04:13 json_config -- scripts/common.sh@353 -- # local d=2
00:16:37.083 17:04:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:37.083 17:04:13 json_config -- scripts/common.sh@355 -- # echo 2
00:16:37.083 17:04:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:16:37.083 17:04:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:37.083 17:04:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:37.083 17:04:13 json_config -- scripts/common.sh@368 -- # return 0
00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:16:37.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.083 --rc genhtml_branch_coverage=1
00:16:37.083 --rc genhtml_function_coverage=1
00:16:37.083 --rc genhtml_legend=1
00:16:37.083 --rc geninfo_all_blocks=1
00:16:37.083 --rc geninfo_unexecuted_blocks=1
00:16:37.083 
00:16:37.083 '
00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:16:37.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.083 --rc genhtml_branch_coverage=1
00:16:37.083 --rc genhtml_function_coverage=1
00:16:37.083 --rc genhtml_legend=1
00:16:37.083 --rc geninfo_all_blocks=1
00:16:37.083 --rc geninfo_unexecuted_blocks=1
00:16:37.083 
00:16:37.083 '
00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:16:37.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.083 --rc genhtml_branch_coverage=1
00:16:37.083 --rc genhtml_function_coverage=1
00:16:37.083 --rc genhtml_legend=1
00:16:37.083 --rc geninfo_all_blocks=1
00:16:37.083 --rc geninfo_unexecuted_blocks=1
00:16:37.083 
00:16:37.083 '
00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:16:37.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.083 --rc genhtml_branch_coverage=1
00:16:37.083 --rc genhtml_function_coverage=1
00:16:37.083 --rc genhtml_legend=1
00:16:37.083 --rc geninfo_all_blocks=1
00:16:37.083 --rc geninfo_unexecuted_blocks=1
00:16:37.083 
00:16:37.083 '
00:16:37.083 17:04:13 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@7 -- # uname -s
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:046bd5e6-ee68-47d1-b992-5bf814ef401d
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=046bd5e6-ee68-47d1-b992-5bf814ef401d
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:37.083 17:04:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:16:37.083 17:04:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:37.083 17:04:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:37.083 17:04:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:37.083 17:04:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.083 17:04:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.083 17:04:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.083 17:04:13 json_config -- paths/export.sh@5 -- # export PATH
00:16:37.083 17:04:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@51 -- # : 0
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:37.083 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:37.083 17:04:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:37.083 17:04:13 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:16:37.083 17:04:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:16:37.083 17:04:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:16:37.083 17:04:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:16:37.083 17:04:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:16:37.083 17:04:13 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:16:37.083 WARNING: No tests are enabled so not running JSON configuration tests
00:16:37.083 17:04:13 json_config -- json_config/json_config.sh@28 -- # exit 0
00:16:37.083 
00:16:37.083 real 0m0.151s
00:16:37.083 user 0m0.093s
00:16:37.083 sys 0m0.054s
00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:37.083 17:04:13 json_config -- common/autotest_common.sh@10 -- # set +x
00:16:37.083 ************************************
00:16:37.083 END TEST json_config
00:16:37.083 ************************************
00:16:37.083 17:04:13 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:16:37.083 17:04:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:16:37.083 17:04:13 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:37.083 17:04:13 -- common/autotest_common.sh@10 -- # set +x
00:16:37.342 ************************************
00:16:37.342 START TEST json_config_extra_key
00:16:37.342 ************************************
00:16:37.342 17:04:13 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:16:37.342 17:04:13 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:16:37.342 17:04:13 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version
00:16:37.343 17:04:13 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:16:37.343 17:04:13 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:16:37.343 17:04:13 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:37.343 17:04:13 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:16:37.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.343 --rc genhtml_branch_coverage=1
00:16:37.343 --rc genhtml_function_coverage=1
00:16:37.343 --rc genhtml_legend=1
00:16:37.343 --rc geninfo_all_blocks=1
00:16:37.343 --rc geninfo_unexecuted_blocks=1
00:16:37.343 
00:16:37.343 '
00:16:37.343 17:04:13 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:16:37.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.343 --rc genhtml_branch_coverage=1
00:16:37.343 --rc genhtml_function_coverage=1
00:16:37.343 --rc genhtml_legend=1
00:16:37.343 --rc geninfo_all_blocks=1
00:16:37.343 --rc geninfo_unexecuted_blocks=1
00:16:37.343 
00:16:37.343 '
00:16:37.343 17:04:13 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:16:37.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.343 --rc genhtml_branch_coverage=1
00:16:37.343 --rc genhtml_function_coverage=1
00:16:37.343 --rc genhtml_legend=1
00:16:37.343 --rc geninfo_all_blocks=1
00:16:37.343 --rc geninfo_unexecuted_blocks=1
00:16:37.343 
00:16:37.343 '
00:16:37.343 17:04:13 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:16:37.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.343 --rc genhtml_branch_coverage=1
00:16:37.343 --rc genhtml_function_coverage=1
00:16:37.343 --rc genhtml_legend=1
00:16:37.343 --rc geninfo_all_blocks=1
00:16:37.343 --rc geninfo_unexecuted_blocks=1
00:16:37.343 
00:16:37.343 '
00:16:37.343 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:046bd5e6-ee68-47d1-b992-5bf814ef401d
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=046bd5e6-ee68-47d1-b992-5bf814ef401d
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:37.343 17:04:13 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:37.343 17:04:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.343 17:04:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.343 17:04:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.343 17:04:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:16:37.343 17:04:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:37.343 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:37.343 17:04:13 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:37.343 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:16:37.343 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:16:37.343 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:16:37.343 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:16:37.343 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:16:37.343 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:16:37.343 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:16:37.344 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:16:37.344 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:16:37.344 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:16:37.344 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:16:37.344 INFO: launching applications...
00:16:37.344 17:04:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=56949
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:16:37.344 Waiting for target to run...
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 56949 /var/tmp/spdk_tgt.sock
00:16:37.344 17:04:13 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 56949 ']'
00:16:37.344 17:04:13 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:16:37.344 17:04:13 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:37.344 17:04:13 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:16:37.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:16:37.344 17:04:13 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable
00:16:37.344 17:04:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:16:37.344 17:04:13 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:16:37.344 [2024-11-08 17:04:14.027942] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization...
00:16:37.344 [2024-11-08 17:04:14.028064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56949 ]
00:16:37.910 [2024-11-08 17:04:14.345107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:37.911 [2024-11-08 17:04:14.507881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:38.478 
00:16:38.478 INFO: shutting down applications...
00:16:38.478 17:04:15 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:16:38.478 17:04:15 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0
00:16:38.478 17:04:15 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:16:38.478 17:04:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:16:38.478 17:04:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:16:38.478 17:04:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:16:38.478 17:04:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:16:38.478 17:04:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 56949 ]]
00:16:38.478 17:04:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 56949
00:16:38.478 17:04:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:16:38.478 17:04:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:16:38.478 17:04:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56949
00:16:38.478 17:04:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:16:39.043 17:04:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:16:39.043 17:04:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:16:39.043 17:04:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56949
00:16:39.043 17:04:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:16:39.609 17:04:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:16:39.609 17:04:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:16:39.609 17:04:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56949
00:16:39.609 17:04:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:16:39.869 17:04:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:16:39.869 17:04:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:16:39.869 17:04:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56949
00:16:39.869 17:04:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:16:40.435 SPDK target shutdown done
00:16:40.435 Success
00:16:40.435 17:04:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:16:40.435 17:04:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:16:40.435 17:04:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56949
00:16:40.435 17:04:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:16:40.435 17:04:17 json_config_extra_key -- json_config/common.sh@43 -- # break
00:16:40.435 17:04:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:16:40.435 17:04:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:16:40.435 17:04:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:16:40.435 
00:16:40.435 real 0m3.256s
00:16:40.435 user 0m2.991s
00:16:40.435 sys 0m0.452s
00:16:40.435 ************************************
00:16:40.435 END TEST json_config_extra_key
00:16:40.435 ************************************
00:16:40.435 17:04:17 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:40.435 17:04:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:16:40.435 17:04:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:16:40.435 17:04:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:16:40.435 17:04:17 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:40.435 17:04:17 -- common/autotest_common.sh@10 -- # set +x
00:16:40.435 ************************************
00:16:40.435 START TEST alias_rpc
00:16:40.435 ************************************
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:16:40.694 * Looking for test storage...
00:16:40.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@345 -- # : 1
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:16:40.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:40.694 17:04:17 alias_rpc -- scripts/common.sh@368 -- # return 0
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:16:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.694 --rc genhtml_branch_coverage=1
00:16:40.694 --rc genhtml_function_coverage=1
00:16:40.694 --rc genhtml_legend=1
00:16:40.694 --rc geninfo_all_blocks=1
00:16:40.694 --rc geninfo_unexecuted_blocks=1
00:16:40.694 
00:16:40.694 '
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:16:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.694 --rc genhtml_branch_coverage=1
00:16:40.694 --rc genhtml_function_coverage=1
00:16:40.694 --rc genhtml_legend=1
00:16:40.694 --rc geninfo_all_blocks=1
00:16:40.694 --rc geninfo_unexecuted_blocks=1
00:16:40.694 
00:16:40.694 '
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:16:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.694 --rc genhtml_branch_coverage=1
00:16:40.694 --rc genhtml_function_coverage=1
00:16:40.694 --rc genhtml_legend=1
00:16:40.694 --rc geninfo_all_blocks=1
00:16:40.694 --rc geninfo_unexecuted_blocks=1
00:16:40.694 
00:16:40.694 '
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:16:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.694 --rc genhtml_branch_coverage=1
00:16:40.694 --rc genhtml_function_coverage=1
00:16:40.694 --rc genhtml_legend=1
00:16:40.694 --rc geninfo_all_blocks=1
00:16:40.694 --rc geninfo_unexecuted_blocks=1
00:16:40.694 
00:16:40.694 '
00:16:40.694 17:04:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:16:40.694 17:04:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57042
00:16:40.694 17:04:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57042
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57042 ']'
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:40.694 17:04:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:16:40.694 17:04:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:40.694 [2024-11-08 17:04:17.351345] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization...
00:16:40.694 [2024-11-08 17:04:17.351491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57042 ]
00:16:40.953 [2024-11-08 17:04:17.521661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:40.953 [2024-11-08 17:04:17.635978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@866 -- # return 0
00:16:41.889 17:04:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:16:41.889 17:04:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57042
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57042 ']'
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57042
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@957 -- # uname
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57042
00:16:41.889 killing process with pid 57042
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57042'
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@971 -- # kill 57042
00:16:41.889 17:04:18 alias_rpc -- common/autotest_common.sh@976 -- # wait 57042
00:16:43.822 ************************************
00:16:43.822 END TEST alias_rpc
00:16:43.822 ************************************
00:16:43.822 
00:16:43.822 real 0m3.020s
00:16:43.822 user 0m3.032s
00:16:43.822 sys 0m0.497s
00:16:43.822 17:04:20 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:43.822 17:04:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:43.822 17:04:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:16:43.822 17:04:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:16:43.822 17:04:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:16:43.822 17:04:20 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:43.822 17:04:20 -- common/autotest_common.sh@10 -- # set +x
00:16:43.822 ************************************
00:16:43.822 START TEST spdkcli_tcp
00:16:43.822 ************************************
00:16:43.822 17:04:20 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:16:43.822 * Looking for test storage...
00:16:43.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:16:43.822 17:04:20 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:16:43.822 17:04:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version
00:16:43.822 17:04:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:16:43.822 17:04:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:43.822 17:04:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:16:43.822 17:04:20 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:43.822 17:04:20 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:16:43.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:43.822 --rc genhtml_branch_coverage=1
00:16:43.822 --rc genhtml_function_coverage=1
00:16:43.822 --rc genhtml_legend=1
00:16:43.822 --rc geninfo_all_blocks=1 00:16:43.822 --rc geninfo_unexecuted_blocks=1 00:16:43.822 00:16:43.822 ' 00:16:43.822 17:04:20 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:43.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.823 --rc genhtml_branch_coverage=1 00:16:43.823 --rc genhtml_function_coverage=1 00:16:43.823 --rc genhtml_legend=1 00:16:43.823 --rc geninfo_all_blocks=1 00:16:43.823 --rc geninfo_unexecuted_blocks=1 00:16:43.823 00:16:43.823 ' 00:16:43.823 17:04:20 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:43.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.823 --rc genhtml_branch_coverage=1 00:16:43.823 --rc genhtml_function_coverage=1 00:16:43.823 --rc genhtml_legend=1 00:16:43.823 --rc geninfo_all_blocks=1 00:16:43.823 --rc geninfo_unexecuted_blocks=1 00:16:43.823 00:16:43.823 ' 00:16:43.823 17:04:20 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:43.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.823 --rc genhtml_branch_coverage=1 00:16:43.823 --rc genhtml_function_coverage=1 00:16:43.823 --rc genhtml_legend=1 00:16:43.823 --rc geninfo_all_blocks=1 00:16:43.823 --rc geninfo_unexecuted_blocks=1 00:16:43.823 00:16:43.823 ' 00:16:43.823 17:04:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:43.823 17:04:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:43.823 17:04:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:43.823 17:04:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:16:43.823 17:04:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:16:43.823 17:04:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.823 17:04:20 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:16:43.823 17:04:20 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:43.823 17:04:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:43.823 17:04:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57138 00:16:43.823 17:04:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:43.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.823 17:04:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57138 00:16:43.823 17:04:20 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57138 ']' 00:16:43.823 17:04:20 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.823 17:04:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:43.823 17:04:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.823 17:04:20 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:43.823 17:04:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:43.823 [2024-11-08 17:04:20.490588] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:16:43.823 [2024-11-08 17:04:20.490818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57138 ] 00:16:44.086 [2024-11-08 17:04:20.660849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:44.086 [2024-11-08 17:04:20.778869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.086 [2024-11-08 17:04:20.779109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.025 17:04:21 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:45.025 17:04:21 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:16:45.025 17:04:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57155 00:16:45.025 17:04:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:16:45.025 17:04:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:16:45.025 [ 00:16:45.025 "bdev_malloc_delete", 00:16:45.025 "bdev_malloc_create", 00:16:45.025 "bdev_null_resize", 00:16:45.025 "bdev_null_delete", 00:16:45.025 "bdev_null_create", 00:16:45.025 "bdev_nvme_cuse_unregister", 00:16:45.025 "bdev_nvme_cuse_register", 00:16:45.025 "bdev_opal_new_user", 00:16:45.025 "bdev_opal_set_lock_state", 00:16:45.025 "bdev_opal_delete", 00:16:45.025 "bdev_opal_get_info", 00:16:45.025 "bdev_opal_create", 00:16:45.025 "bdev_nvme_opal_revert", 00:16:45.025 "bdev_nvme_opal_init", 00:16:45.025 "bdev_nvme_send_cmd", 00:16:45.025 "bdev_nvme_set_keys", 00:16:45.025 "bdev_nvme_get_path_iostat", 00:16:45.025 "bdev_nvme_get_mdns_discovery_info", 00:16:45.025 "bdev_nvme_stop_mdns_discovery", 00:16:45.025 "bdev_nvme_start_mdns_discovery", 00:16:45.025 "bdev_nvme_set_multipath_policy", 00:16:45.025 
"bdev_nvme_set_preferred_path", 00:16:45.025 "bdev_nvme_get_io_paths", 00:16:45.025 "bdev_nvme_remove_error_injection", 00:16:45.025 "bdev_nvme_add_error_injection", 00:16:45.025 "bdev_nvme_get_discovery_info", 00:16:45.025 "bdev_nvme_stop_discovery", 00:16:45.025 "bdev_nvme_start_discovery", 00:16:45.025 "bdev_nvme_get_controller_health_info", 00:16:45.026 "bdev_nvme_disable_controller", 00:16:45.026 "bdev_nvme_enable_controller", 00:16:45.026 "bdev_nvme_reset_controller", 00:16:45.026 "bdev_nvme_get_transport_statistics", 00:16:45.026 "bdev_nvme_apply_firmware", 00:16:45.026 "bdev_nvme_detach_controller", 00:16:45.026 "bdev_nvme_get_controllers", 00:16:45.026 "bdev_nvme_attach_controller", 00:16:45.026 "bdev_nvme_set_hotplug", 00:16:45.026 "bdev_nvme_set_options", 00:16:45.026 "bdev_passthru_delete", 00:16:45.026 "bdev_passthru_create", 00:16:45.026 "bdev_lvol_set_parent_bdev", 00:16:45.026 "bdev_lvol_set_parent", 00:16:45.026 "bdev_lvol_check_shallow_copy", 00:16:45.026 "bdev_lvol_start_shallow_copy", 00:16:45.026 "bdev_lvol_grow_lvstore", 00:16:45.026 "bdev_lvol_get_lvols", 00:16:45.026 "bdev_lvol_get_lvstores", 00:16:45.026 "bdev_lvol_delete", 00:16:45.026 "bdev_lvol_set_read_only", 00:16:45.026 "bdev_lvol_resize", 00:16:45.026 "bdev_lvol_decouple_parent", 00:16:45.026 "bdev_lvol_inflate", 00:16:45.026 "bdev_lvol_rename", 00:16:45.026 "bdev_lvol_clone_bdev", 00:16:45.026 "bdev_lvol_clone", 00:16:45.026 "bdev_lvol_snapshot", 00:16:45.026 "bdev_lvol_create", 00:16:45.026 "bdev_lvol_delete_lvstore", 00:16:45.026 "bdev_lvol_rename_lvstore", 00:16:45.026 "bdev_lvol_create_lvstore", 00:16:45.026 "bdev_raid_set_options", 00:16:45.026 "bdev_raid_remove_base_bdev", 00:16:45.026 "bdev_raid_add_base_bdev", 00:16:45.026 "bdev_raid_delete", 00:16:45.026 "bdev_raid_create", 00:16:45.026 "bdev_raid_get_bdevs", 00:16:45.026 "bdev_error_inject_error", 00:16:45.026 "bdev_error_delete", 00:16:45.026 "bdev_error_create", 00:16:45.026 "bdev_split_delete", 00:16:45.026 
"bdev_split_create", 00:16:45.026 "bdev_delay_delete", 00:16:45.026 "bdev_delay_create", 00:16:45.026 "bdev_delay_update_latency", 00:16:45.026 "bdev_zone_block_delete", 00:16:45.026 "bdev_zone_block_create", 00:16:45.026 "blobfs_create", 00:16:45.026 "blobfs_detect", 00:16:45.026 "blobfs_set_cache_size", 00:16:45.026 "bdev_aio_delete", 00:16:45.026 "bdev_aio_rescan", 00:16:45.026 "bdev_aio_create", 00:16:45.026 "bdev_ftl_set_property", 00:16:45.026 "bdev_ftl_get_properties", 00:16:45.026 "bdev_ftl_get_stats", 00:16:45.026 "bdev_ftl_unmap", 00:16:45.026 "bdev_ftl_unload", 00:16:45.026 "bdev_ftl_delete", 00:16:45.026 "bdev_ftl_load", 00:16:45.026 "bdev_ftl_create", 00:16:45.026 "bdev_virtio_attach_controller", 00:16:45.026 "bdev_virtio_scsi_get_devices", 00:16:45.026 "bdev_virtio_detach_controller", 00:16:45.026 "bdev_virtio_blk_set_hotplug", 00:16:45.026 "bdev_iscsi_delete", 00:16:45.026 "bdev_iscsi_create", 00:16:45.026 "bdev_iscsi_set_options", 00:16:45.026 "accel_error_inject_error", 00:16:45.026 "ioat_scan_accel_module", 00:16:45.026 "dsa_scan_accel_module", 00:16:45.026 "iaa_scan_accel_module", 00:16:45.026 "keyring_file_remove_key", 00:16:45.026 "keyring_file_add_key", 00:16:45.026 "keyring_linux_set_options", 00:16:45.026 "fsdev_aio_delete", 00:16:45.026 "fsdev_aio_create", 00:16:45.026 "iscsi_get_histogram", 00:16:45.026 "iscsi_enable_histogram", 00:16:45.026 "iscsi_set_options", 00:16:45.026 "iscsi_get_auth_groups", 00:16:45.026 "iscsi_auth_group_remove_secret", 00:16:45.026 "iscsi_auth_group_add_secret", 00:16:45.026 "iscsi_delete_auth_group", 00:16:45.026 "iscsi_create_auth_group", 00:16:45.026 "iscsi_set_discovery_auth", 00:16:45.026 "iscsi_get_options", 00:16:45.026 "iscsi_target_node_request_logout", 00:16:45.026 "iscsi_target_node_set_redirect", 00:16:45.026 "iscsi_target_node_set_auth", 00:16:45.026 "iscsi_target_node_add_lun", 00:16:45.026 "iscsi_get_stats", 00:16:45.026 "iscsi_get_connections", 00:16:45.026 "iscsi_portal_group_set_auth", 
00:16:45.026 "iscsi_start_portal_group", 00:16:45.026 "iscsi_delete_portal_group", 00:16:45.026 "iscsi_create_portal_group", 00:16:45.026 "iscsi_get_portal_groups", 00:16:45.026 "iscsi_delete_target_node", 00:16:45.026 "iscsi_target_node_remove_pg_ig_maps", 00:16:45.026 "iscsi_target_node_add_pg_ig_maps", 00:16:45.026 "iscsi_create_target_node", 00:16:45.026 "iscsi_get_target_nodes", 00:16:45.026 "iscsi_delete_initiator_group", 00:16:45.026 "iscsi_initiator_group_remove_initiators", 00:16:45.026 "iscsi_initiator_group_add_initiators", 00:16:45.026 "iscsi_create_initiator_group", 00:16:45.026 "iscsi_get_initiator_groups", 00:16:45.026 "nvmf_set_crdt", 00:16:45.026 "nvmf_set_config", 00:16:45.026 "nvmf_set_max_subsystems", 00:16:45.026 "nvmf_stop_mdns_prr", 00:16:45.026 "nvmf_publish_mdns_prr", 00:16:45.026 "nvmf_subsystem_get_listeners", 00:16:45.026 "nvmf_subsystem_get_qpairs", 00:16:45.026 "nvmf_subsystem_get_controllers", 00:16:45.026 "nvmf_get_stats", 00:16:45.026 "nvmf_get_transports", 00:16:45.026 "nvmf_create_transport", 00:16:45.026 "nvmf_get_targets", 00:16:45.026 "nvmf_delete_target", 00:16:45.026 "nvmf_create_target", 00:16:45.026 "nvmf_subsystem_allow_any_host", 00:16:45.026 "nvmf_subsystem_set_keys", 00:16:45.026 "nvmf_subsystem_remove_host", 00:16:45.026 "nvmf_subsystem_add_host", 00:16:45.026 "nvmf_ns_remove_host", 00:16:45.026 "nvmf_ns_add_host", 00:16:45.026 "nvmf_subsystem_remove_ns", 00:16:45.026 "nvmf_subsystem_set_ns_ana_group", 00:16:45.026 "nvmf_subsystem_add_ns", 00:16:45.026 "nvmf_subsystem_listener_set_ana_state", 00:16:45.026 "nvmf_discovery_get_referrals", 00:16:45.026 "nvmf_discovery_remove_referral", 00:16:45.026 "nvmf_discovery_add_referral", 00:16:45.026 "nvmf_subsystem_remove_listener", 00:16:45.026 "nvmf_subsystem_add_listener", 00:16:45.026 "nvmf_delete_subsystem", 00:16:45.026 "nvmf_create_subsystem", 00:16:45.026 "nvmf_get_subsystems", 00:16:45.026 "env_dpdk_get_mem_stats", 00:16:45.026 "nbd_get_disks", 00:16:45.026 
"nbd_stop_disk", 00:16:45.026 "nbd_start_disk", 00:16:45.026 "ublk_recover_disk", 00:16:45.026 "ublk_get_disks", 00:16:45.026 "ublk_stop_disk", 00:16:45.026 "ublk_start_disk", 00:16:45.026 "ublk_destroy_target", 00:16:45.026 "ublk_create_target", 00:16:45.026 "virtio_blk_create_transport", 00:16:45.026 "virtio_blk_get_transports", 00:16:45.026 "vhost_controller_set_coalescing", 00:16:45.026 "vhost_get_controllers", 00:16:45.026 "vhost_delete_controller", 00:16:45.026 "vhost_create_blk_controller", 00:16:45.026 "vhost_scsi_controller_remove_target", 00:16:45.026 "vhost_scsi_controller_add_target", 00:16:45.026 "vhost_start_scsi_controller", 00:16:45.026 "vhost_create_scsi_controller", 00:16:45.026 "thread_set_cpumask", 00:16:45.026 "scheduler_set_options", 00:16:45.026 "framework_get_governor", 00:16:45.026 "framework_get_scheduler", 00:16:45.026 "framework_set_scheduler", 00:16:45.026 "framework_get_reactors", 00:16:45.026 "thread_get_io_channels", 00:16:45.026 "thread_get_pollers", 00:16:45.026 "thread_get_stats", 00:16:45.026 "framework_monitor_context_switch", 00:16:45.026 "spdk_kill_instance", 00:16:45.026 "log_enable_timestamps", 00:16:45.026 "log_get_flags", 00:16:45.026 "log_clear_flag", 00:16:45.026 "log_set_flag", 00:16:45.026 "log_get_level", 00:16:45.026 "log_set_level", 00:16:45.026 "log_get_print_level", 00:16:45.026 "log_set_print_level", 00:16:45.026 "framework_enable_cpumask_locks", 00:16:45.026 "framework_disable_cpumask_locks", 00:16:45.026 "framework_wait_init", 00:16:45.026 "framework_start_init", 00:16:45.026 "scsi_get_devices", 00:16:45.026 "bdev_get_histogram", 00:16:45.026 "bdev_enable_histogram", 00:16:45.026 "bdev_set_qos_limit", 00:16:45.026 "bdev_set_qd_sampling_period", 00:16:45.026 "bdev_get_bdevs", 00:16:45.026 "bdev_reset_iostat", 00:16:45.026 "bdev_get_iostat", 00:16:45.026 "bdev_examine", 00:16:45.026 "bdev_wait_for_examine", 00:16:45.026 "bdev_set_options", 00:16:45.026 "accel_get_stats", 00:16:45.026 "accel_set_options", 
00:16:45.026 "accel_set_driver", 00:16:45.026 "accel_crypto_key_destroy", 00:16:45.026 "accel_crypto_keys_get", 00:16:45.026 "accel_crypto_key_create", 00:16:45.026 "accel_assign_opc", 00:16:45.026 "accel_get_module_info", 00:16:45.026 "accel_get_opc_assignments", 00:16:45.026 "vmd_rescan", 00:16:45.026 "vmd_remove_device", 00:16:45.026 "vmd_enable", 00:16:45.026 "sock_get_default_impl", 00:16:45.026 "sock_set_default_impl", 00:16:45.026 "sock_impl_set_options", 00:16:45.026 "sock_impl_get_options", 00:16:45.026 "iobuf_get_stats", 00:16:45.026 "iobuf_set_options", 00:16:45.026 "keyring_get_keys", 00:16:45.026 "framework_get_pci_devices", 00:16:45.026 "framework_get_config", 00:16:45.026 "framework_get_subsystems", 00:16:45.026 "fsdev_set_opts", 00:16:45.026 "fsdev_get_opts", 00:16:45.026 "trace_get_info", 00:16:45.026 "trace_get_tpoint_group_mask", 00:16:45.026 "trace_disable_tpoint_group", 00:16:45.026 "trace_enable_tpoint_group", 00:16:45.026 "trace_clear_tpoint_mask", 00:16:45.026 "trace_set_tpoint_mask", 00:16:45.026 "notify_get_notifications", 00:16:45.026 "notify_get_types", 00:16:45.026 "spdk_get_version", 00:16:45.026 "rpc_get_methods" 00:16:45.026 ] 00:16:45.026 17:04:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:16:45.026 17:04:21 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:45.026 17:04:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.026 17:04:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:45.026 17:04:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57138 00:16:45.026 17:04:21 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57138 ']' 00:16:45.027 17:04:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57138 00:16:45.027 17:04:21 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:16:45.027 17:04:21 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:45.027 17:04:21 spdkcli_tcp -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57138 00:16:45.027 killing process with pid 57138 00:16:45.027 17:04:21 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:45.027 17:04:21 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:45.027 17:04:21 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57138' 00:16:45.027 17:04:21 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57138 00:16:45.027 17:04:21 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57138 00:16:46.927 ************************************ 00:16:46.927 END TEST spdkcli_tcp 00:16:46.927 ************************************ 00:16:46.927 00:16:46.927 real 0m3.135s 00:16:46.927 user 0m5.521s 00:16:46.927 sys 0m0.510s 00:16:46.927 17:04:23 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:46.927 17:04:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:46.927 17:04:23 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:46.927 17:04:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:46.927 17:04:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:46.927 17:04:23 -- common/autotest_common.sh@10 -- # set +x 00:16:46.927 ************************************ 00:16:46.927 START TEST dpdk_mem_utility 00:16:46.927 ************************************ 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:46.927 * Looking for test storage... 
00:16:46.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.927 17:04:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.927 --rc genhtml_branch_coverage=1 00:16:46.927 --rc genhtml_function_coverage=1 00:16:46.927 --rc genhtml_legend=1 00:16:46.927 --rc geninfo_all_blocks=1 00:16:46.927 --rc geninfo_unexecuted_blocks=1 00:16:46.927 00:16:46.927 ' 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.927 --rc genhtml_branch_coverage=1 00:16:46.927 --rc genhtml_function_coverage=1 00:16:46.927 --rc genhtml_legend=1 00:16:46.927 --rc geninfo_all_blocks=1 00:16:46.927 --rc 
geninfo_unexecuted_blocks=1 00:16:46.927 00:16:46.927 ' 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.927 --rc genhtml_branch_coverage=1 00:16:46.927 --rc genhtml_function_coverage=1 00:16:46.927 --rc genhtml_legend=1 00:16:46.927 --rc geninfo_all_blocks=1 00:16:46.927 --rc geninfo_unexecuted_blocks=1 00:16:46.927 00:16:46.927 ' 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.927 --rc genhtml_branch_coverage=1 00:16:46.927 --rc genhtml_function_coverage=1 00:16:46.927 --rc genhtml_legend=1 00:16:46.927 --rc geninfo_all_blocks=1 00:16:46.927 --rc geninfo_unexecuted_blocks=1 00:16:46.927 00:16:46.927 ' 00:16:46.927 17:04:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:46.927 17:04:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57249 00:16:46.927 17:04:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57249 00:16:46.927 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57249 ']' 00:16:46.928 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.928 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:46.928 17:04:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:46.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.928 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:46.928 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:46.928 17:04:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:47.186 [2024-11-08 17:04:23.646846] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:16:47.186 [2024-11-08 17:04:23.647170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57249 ] 00:16:47.186 [2024-11-08 17:04:23.802823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.444 [2024-11-08 17:04:23.920732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.040 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:48.040 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:16:48.040 17:04:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:16:48.040 17:04:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:16:48.040 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.040 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:48.040 { 00:16:48.040 "filename": "/tmp/spdk_mem_dump.txt" 00:16:48.040 } 00:16:48.040 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.040 17:04:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:48.040 DPDK memory size 816.000000 MiB in 1 heap(s) 00:16:48.040 1 heaps totaling size 816.000000 MiB 00:16:48.040 size: 816.000000 MiB heap id: 0 00:16:48.040 end heaps---------- 00:16:48.040 9 mempools totaling size 595.772034 MiB 00:16:48.040 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:16:48.040 size: 158.602051 MiB name: PDU_data_out_Pool 00:16:48.040 size: 92.545471 MiB name: bdev_io_57249 00:16:48.040 size: 50.003479 MiB name: msgpool_57249 00:16:48.040 size: 36.509338 MiB name: fsdev_io_57249 00:16:48.040 size: 21.763794 MiB name: PDU_Pool 00:16:48.040 size: 19.513306 MiB name: SCSI_TASK_Pool 00:16:48.040 size: 4.133484 MiB name: evtpool_57249 00:16:48.040 size: 0.026123 MiB name: Session_Pool 00:16:48.040 end mempools------- 00:16:48.040 6 memzones totaling size 4.142822 MiB 00:16:48.040 size: 1.000366 MiB name: RG_ring_0_57249 00:16:48.040 size: 1.000366 MiB name: RG_ring_1_57249 00:16:48.040 size: 1.000366 MiB name: RG_ring_4_57249 00:16:48.040 size: 1.000366 MiB name: RG_ring_5_57249 00:16:48.040 size: 0.125366 MiB name: RG_ring_2_57249 00:16:48.040 size: 0.015991 MiB name: RG_ring_3_57249 00:16:48.040 end memzones------- 00:16:48.040 17:04:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:16:48.040 heap id: 0 total size: 816.000000 MiB number of busy elements: 304 number of free elements: 18 00:16:48.040 list of free elements. 
size: 16.794067 MiB 00:16:48.040 element at address: 0x200006400000 with size: 1.995972 MiB 00:16:48.040 element at address: 0x20000a600000 with size: 1.995972 MiB 00:16:48.040 element at address: 0x200003e00000 with size: 1.991028 MiB 00:16:48.040 element at address: 0x200018d00040 with size: 0.999939 MiB 00:16:48.040 element at address: 0x200019100040 with size: 0.999939 MiB 00:16:48.040 element at address: 0x200019200000 with size: 0.999084 MiB 00:16:48.040 element at address: 0x200031e00000 with size: 0.994324 MiB 00:16:48.040 element at address: 0x200000400000 with size: 0.992004 MiB 00:16:48.040 element at address: 0x200018a00000 with size: 0.959656 MiB 00:16:48.040 element at address: 0x200019500040 with size: 0.936401 MiB 00:16:48.040 element at address: 0x200000200000 with size: 0.716980 MiB 00:16:48.040 element at address: 0x20001ac00000 with size: 0.562927 MiB 00:16:48.040 element at address: 0x200000c00000 with size: 0.490173 MiB 00:16:48.040 element at address: 0x200018e00000 with size: 0.488220 MiB 00:16:48.040 element at address: 0x200019600000 with size: 0.485413 MiB 00:16:48.040 element at address: 0x200012c00000 with size: 0.443481 MiB 00:16:48.040 element at address: 0x200028000000 with size: 0.391663 MiB 00:16:48.040 element at address: 0x200000800000 with size: 0.350891 MiB 00:16:48.040 list of standard malloc elements. 
size: 199.285034 MiB
00:16:48.040 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:16:48.040 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:16:48.040 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:16:48.040 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:16:48.040 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:16:48.040 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:16:48.040 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:16:48.040 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:16:48.040 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:16:48.040 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:16:48.040 element at address: 0x200012bff040 with size: 0.000305 MiB
00:16:48.040 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:16:48.040 [repetitive per-element heap dump elided: several hundred further entries of the form "element at address: 0x... with size: 0.000244 MiB", covering 0x2000003d9d80 through 0x20002806fd80]
00:16:48.042 element at address: 0x20002806fe80 with size: 0.000244 MiB
00:16:48.042 list of memzone
associated elements. size: 599.920898 MiB
00:16:48.042 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:16:48.042 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:16:48.042 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:16:48.042 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:16:48.042 element at address: 0x200012df4740 with size: 92.045105 MiB
00:16:48.042 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57249_0
00:16:48.042 element at address: 0x200000dff340 with size: 48.003113 MiB
00:16:48.042 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57249_0
00:16:48.042 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:16:48.042 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57249_0
00:16:48.042 element at address: 0x2000197be900 with size: 20.255615 MiB
00:16:48.042 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:16:48.042 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:16:48.042 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:16:48.042 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:16:48.042 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57249_0
00:16:48.043 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:16:48.043 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57249
00:16:48.043 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:16:48.043 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57249
00:16:48.043 element at address: 0x200018efde00 with size: 1.008179 MiB
00:16:48.043 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:16:48.043 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:16:48.043 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:16:48.043 element at address: 0x200018afde00 with size: 1.008179 MiB
00:16:48.043 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:16:48.043 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:16:48.043 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:16:48.043 element at address: 0x200000cff100 with size: 1.000549 MiB
00:16:48.043 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57249
00:16:48.043 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:16:48.043 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57249
00:16:48.043 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:16:48.043 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57249
00:16:48.043 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:16:48.043 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57249
00:16:48.043 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:16:48.043 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57249
00:16:48.043 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:16:48.043 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57249
00:16:48.043 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:16:48.043 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:16:48.043 element at address: 0x200012c72280 with size: 0.500549 MiB
00:16:48.043 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:16:48.043 element at address: 0x20001967c440 with size: 0.250549 MiB
00:16:48.043 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:16:48.043 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:16:48.043 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57249
00:16:48.043 element at address: 0x20000085df80 with size: 0.125549 MiB
00:16:48.043 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57249
00:16:48.043 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:16:48.043 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:16:48.043 element at address: 0x200028064640 with size: 0.023804 MiB
00:16:48.043 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:16:48.043 element at address: 0x200000859d40 with size: 0.016174 MiB
00:16:48.043 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57249
00:16:48.043 element at address: 0x20002806a7c0 with size: 0.002502 MiB
00:16:48.043 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:16:48.043 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:16:48.043 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57249
00:16:48.043 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:16:48.043 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57249
00:16:48.043 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:16:48.043 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57249
00:16:48.043 element at address: 0x20002806b300 with size: 0.000366 MiB
00:16:48.043 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:16:48.043 17:04:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:16:48.043 17:04:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57249
00:16:48.043 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57249 ']'
00:16:48.043 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57249
00:16:48.043 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:16:48.043 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:16:48.043 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57249
00:16:48.043 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:16:48.043 17:04:24 dpdk_mem_utility
-- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:16:48.043 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57249'
00:16:48.043 killing process with pid 57249
00:16:48.043 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57249
00:16:48.043 17:04:24 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57249
00:16:49.946
00:16:49.946 real	0m2.935s
00:16:49.946 user	0m2.918s
00:16:49.946 sys	0m0.461s
00:16:49.946 17:04:26 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:49.946 17:04:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:16:49.946 ************************************
00:16:49.946 END TEST dpdk_mem_utility
00:16:49.946 ************************************
00:16:49.946 17:04:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:16:49.946 17:04:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:16:49.946 17:04:26 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:49.946 17:04:26 -- common/autotest_common.sh@10 -- # set +x
00:16:49.946 ************************************
00:16:49.946 START TEST event
00:16:49.946 ************************************
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:16:49.946 * Looking for test storage...
00:16:49.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1691 -- # lcov --version
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:16:49.946 17:04:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:49.946 17:04:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:49.946 17:04:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:49.946 17:04:26 event -- scripts/common.sh@336 -- # IFS=.-:
00:16:49.946 17:04:26 event -- scripts/common.sh@336 -- # read -ra ver1
00:16:49.946 17:04:26 event -- scripts/common.sh@337 -- # IFS=.-:
00:16:49.946 17:04:26 event -- scripts/common.sh@337 -- # read -ra ver2
00:16:49.946 17:04:26 event -- scripts/common.sh@338 -- # local 'op=<'
00:16:49.946 17:04:26 event -- scripts/common.sh@340 -- # ver1_l=2
00:16:49.946 17:04:26 event -- scripts/common.sh@341 -- # ver2_l=1
00:16:49.946 17:04:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:49.946 17:04:26 event -- scripts/common.sh@344 -- # case "$op" in
00:16:49.946 17:04:26 event -- scripts/common.sh@345 -- # : 1
00:16:49.946 17:04:26 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:49.946 17:04:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:49.946 17:04:26 event -- scripts/common.sh@365 -- # decimal 1
00:16:49.946 17:04:26 event -- scripts/common.sh@353 -- # local d=1
00:16:49.946 17:04:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:49.946 17:04:26 event -- scripts/common.sh@355 -- # echo 1
00:16:49.946 17:04:26 event -- scripts/common.sh@365 -- # ver1[v]=1
00:16:49.946 17:04:26 event -- scripts/common.sh@366 -- # decimal 2
00:16:49.946 17:04:26 event -- scripts/common.sh@353 -- # local d=2
00:16:49.946 17:04:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:49.946 17:04:26 event -- scripts/common.sh@355 -- # echo 2
00:16:49.946 17:04:26 event -- scripts/common.sh@366 -- # ver2[v]=2
00:16:49.946 17:04:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:49.946 17:04:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:49.946 17:04:26 event -- scripts/common.sh@368 -- # return 0
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:16:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.946 --rc genhtml_branch_coverage=1
00:16:49.946 --rc genhtml_function_coverage=1
00:16:49.946 --rc genhtml_legend=1
00:16:49.946 --rc geninfo_all_blocks=1
00:16:49.946 --rc geninfo_unexecuted_blocks=1
00:16:49.946
00:16:49.946 '
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:16:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.946 --rc genhtml_branch_coverage=1
00:16:49.946 --rc genhtml_function_coverage=1
00:16:49.946 --rc genhtml_legend=1
00:16:49.946 --rc geninfo_all_blocks=1
00:16:49.946 --rc geninfo_unexecuted_blocks=1
00:16:49.946
00:16:49.946 '
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:16:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.946 --rc genhtml_branch_coverage=1
00:16:49.946 --rc genhtml_function_coverage=1
00:16:49.946 --rc genhtml_legend=1
00:16:49.946 --rc geninfo_all_blocks=1
00:16:49.946 --rc geninfo_unexecuted_blocks=1
00:16:49.946
00:16:49.946 '
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:16:49.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.946 --rc genhtml_branch_coverage=1
00:16:49.946 --rc genhtml_function_coverage=1
00:16:49.946 --rc genhtml_legend=1
00:16:49.946 --rc geninfo_all_blocks=1
00:16:49.946 --rc geninfo_unexecuted_blocks=1
00:16:49.946
00:16:49.946 '
00:16:49.946 17:04:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:16:49.946 17:04:26 event -- bdev/nbd_common.sh@6 -- # set -e
00:16:49.946 17:04:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:16:49.946 17:04:26 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:49.946 17:04:26 event -- common/autotest_common.sh@10 -- # set +x
00:16:49.946 ************************************
00:16:49.946 START TEST event_perf
00:16:49.946 ************************************
00:16:49.946 17:04:26 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
Running I/O for 1 seconds...[2024-11-08 17:04:26.621169] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization...
00:16:49.947 [2024-11-08 17:04:26.621787] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57346 ]
00:16:50.206 [2024-11-08 17:04:26.786434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:50.467 [2024-11-08 17:04:26.940602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:50.467 [2024-11-08 17:04:26.941033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:50.467 [2024-11-08 17:04:26.941455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:50.467 Running I/O for 1 seconds...[2024-11-08 17:04:26.941560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:51.410
00:16:51.410 lcore 0: 180966
00:16:51.410 lcore 1: 180964
00:16:51.410 lcore 2: 180966
00:16:51.410 lcore 3: 180966
00:16:51.410 done.
00:16:51.669
00:16:51.669 ************************************
00:16:51.669 END TEST event_perf
00:16:51.669 ************************************
00:16:51.669 real	0m1.539s
00:16:51.669 user	0m4.309s
00:16:51.669 sys	0m0.101s
00:16:51.669 17:04:28 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:51.669 17:04:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:16:51.669 17:04:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:16:51.669 17:04:28 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:16:51.669 17:04:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:51.669 17:04:28 event -- common/autotest_common.sh@10 -- # set +x
00:16:51.669 ************************************
00:16:51.669 START TEST event_reactor
00:16:51.669 ************************************
00:16:51.669 17:04:28 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:16:51.669 [2024-11-08 17:04:28.217886] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization...
00:16:51.669 [2024-11-08 17:04:28.218039] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57390 ]
00:16:51.669 [2024-11-08 17:04:28.381911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:51.928 [2024-11-08 17:04:28.502886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:53.345 test_start
00:16:53.346 oneshot
00:16:53.346 tick 100
00:16:53.346 tick 100
00:16:53.346 tick 250
00:16:53.346 tick 100
00:16:53.346 tick 100
00:16:53.346 tick 100
00:16:53.346 tick 250
00:16:53.346 tick 500
00:16:53.346 tick 100
00:16:53.346 tick 100
00:16:53.346 tick 250
00:16:53.346 tick 100
00:16:53.346 tick 100
00:16:53.346 test_end
00:16:53.346 ************************************
00:16:53.346 END TEST event_reactor
00:16:53.346
00:16:53.346 real	0m1.485s
00:16:53.346 user	0m1.296s
00:16:53.346 sys	0m0.077s
00:16:53.346 17:04:29 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:53.346 17:04:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:16:53.346 ************************************
00:16:53.346 17:04:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:16:53.346 17:04:29 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:16:53.346 17:04:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:53.346 17:04:29 event -- common/autotest_common.sh@10 -- # set +x
00:16:53.346 ************************************
00:16:53.346 START TEST event_reactor_perf
00:16:53.346 ************************************
00:16:53.346 17:04:29 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 [2024-11-08
17:04:29.776017] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:16:53.346 [2024-11-08 17:04:29.776270] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57422 ] 00:16:53.346 [2024-11-08 17:04:29.936534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.617 [2024-11-08 17:04:30.062263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.559 test_start 00:16:54.559 test_end 00:16:54.559 Performance: 313977 events per second 00:16:54.559 00:16:54.559 real 0m1.486s 00:16:54.559 user 0m1.302s 00:16:54.559 sys 0m0.074s 00:16:54.559 ************************************ 00:16:54.559 END TEST event_reactor_perf 00:16:54.559 ************************************ 00:16:54.559 17:04:31 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:54.559 17:04:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:16:54.820 17:04:31 event -- event/event.sh@49 -- # uname -s 00:16:54.820 17:04:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:16:54.820 17:04:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:54.820 17:04:31 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:54.820 17:04:31 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:54.820 17:04:31 event -- common/autotest_common.sh@10 -- # set +x 00:16:54.820 ************************************ 00:16:54.820 START TEST event_scheduler 00:16:54.820 ************************************ 00:16:54.820 17:04:31 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:54.820 * Looking for test storage... 
00:16:54.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:16:54.820 17:04:31 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:54.820 17:04:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:54.820 17:04:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:16:54.820 17:04:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:54.820 17:04:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:54.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.821 --rc genhtml_branch_coverage=1 00:16:54.821 --rc genhtml_function_coverage=1 00:16:54.821 --rc genhtml_legend=1 00:16:54.821 --rc geninfo_all_blocks=1 00:16:54.821 --rc geninfo_unexecuted_blocks=1 00:16:54.821 00:16:54.821 ' 00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:54.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.821 --rc genhtml_branch_coverage=1 00:16:54.821 --rc genhtml_function_coverage=1 00:16:54.821 --rc 
genhtml_legend=1 00:16:54.821 --rc geninfo_all_blocks=1 00:16:54.821 --rc geninfo_unexecuted_blocks=1 00:16:54.821 00:16:54.821 ' 00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:54.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.821 --rc genhtml_branch_coverage=1 00:16:54.821 --rc genhtml_function_coverage=1 00:16:54.821 --rc genhtml_legend=1 00:16:54.821 --rc geninfo_all_blocks=1 00:16:54.821 --rc geninfo_unexecuted_blocks=1 00:16:54.821 00:16:54.821 ' 00:16:54.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:54.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.821 --rc genhtml_branch_coverage=1 00:16:54.821 --rc genhtml_function_coverage=1 00:16:54.821 --rc genhtml_legend=1 00:16:54.821 --rc geninfo_all_blocks=1 00:16:54.821 --rc geninfo_unexecuted_blocks=1 00:16:54.821 00:16:54.821 ' 00:16:54.821 17:04:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:16:54.821 17:04:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57498 00:16:54.821 17:04:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:16:54.821 17:04:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57498 00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 57498 ']' 00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:54.821 17:04:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:54.821 17:04:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:16:54.821 [2024-11-08 17:04:31.520061] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:16:54.821 [2024-11-08 17:04:31.520876] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57498 ] 00:16:55.082 [2024-11-08 17:04:31.689202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:55.344 [2024-11-08 17:04:31.849697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.344 [2024-11-08 17:04:31.850114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.344 [2024-11-08 17:04:31.851078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.344 [2024-11-08 17:04:31.852439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.915 17:04:32 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:55.915 17:04:32 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:16:55.915 17:04:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:16:55.915 17:04:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.915 17:04:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:55.915 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:55.915 POWER: Cannot set governor of lcore 0 to userspace 00:16:55.915 POWER: failed 
to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:55.915 POWER: Cannot set governor of lcore 0 to performance 00:16:55.915 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:55.915 POWER: Cannot set governor of lcore 0 to userspace 00:16:55.915 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:55.915 POWER: Cannot set governor of lcore 0 to userspace 00:16:55.915 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:16:55.915 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:16:55.915 POWER: Unable to set Power Management Environment for lcore 0 00:16:55.915 [2024-11-08 17:04:32.401235] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:16:55.915 [2024-11-08 17:04:32.401320] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:16:55.915 [2024-11-08 17:04:32.401425] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:16:55.915 [2024-11-08 17:04:32.401541] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:16:55.915 [2024-11-08 17:04:32.401594] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:16:55.915 [2024-11-08 17:04:32.401733] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:16:55.915 17:04:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.915 17:04:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:16:55.915 17:04:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.915 17:04:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 [2024-11-08 17:04:32.699805] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:16:56.173 17:04:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:16:56.173 17:04:32 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:56.173 17:04:32 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 ************************************ 00:16:56.173 START TEST scheduler_create_thread 00:16:56.173 ************************************ 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 2 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 3 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 4 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 5 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 6 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.173 7 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 8 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 9 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 10 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.173 17:04:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:57.109 17:04:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.109 17:04:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:16:57.109 17:04:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:16:57.109 17:04:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.109 17:04:33 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:58.495 ************************************ 00:16:58.495 END TEST scheduler_create_thread 00:16:58.495 ************************************ 00:16:58.495 17:04:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.495 00:16:58.495 real 0m2.137s 00:16:58.495 user 0m0.015s 00:16:58.495 sys 0m0.006s 00:16:58.495 17:04:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:58.495 17:04:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:16:58.495 17:04:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:58.495 17:04:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57498 00:16:58.495 17:04:34 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 57498 ']' 00:16:58.495 17:04:34 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 57498 00:16:58.495 17:04:34 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:16:58.495 17:04:34 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:58.495 17:04:34 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57498 00:16:58.495 killing process with pid 57498 00:16:58.495 17:04:34 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:58.495 17:04:34 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:58.495 17:04:34 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57498' 00:16:58.495 17:04:34 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 57498 00:16:58.495 17:04:34 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 57498 00:16:58.755 [2024-11-08 17:04:35.331611] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:16:59.696 00:16:59.696 real 0m4.933s 00:16:59.696 user 0m8.117s 00:16:59.696 sys 0m0.465s 00:16:59.696 17:04:36 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:59.696 17:04:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:16:59.697 ************************************ 00:16:59.697 END TEST event_scheduler 00:16:59.697 ************************************ 00:16:59.697 17:04:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:16:59.697 17:04:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:16:59.697 17:04:36 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:59.697 17:04:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:59.697 17:04:36 event -- common/autotest_common.sh@10 -- # set +x 00:16:59.697 ************************************ 00:16:59.697 START TEST app_repeat 00:16:59.697 ************************************ 00:16:59.697 17:04:36 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:16:59.697 Process app_repeat pid: 57598 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57598 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:16:59.697 
17:04:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57598' 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:16:59.697 spdk_app_start Round 0 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:16:59.697 17:04:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57598 /var/tmp/spdk-nbd.sock 00:16:59.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:59.697 17:04:36 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57598 ']' 00:16:59.697 17:04:36 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:59.697 17:04:36 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:59.697 17:04:36 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:59.697 17:04:36 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:59.697 17:04:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:16:59.697 [2024-11-08 17:04:36.364291] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:16:59.697 [2024-11-08 17:04:36.364727] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57598 ] 00:16:59.957 [2024-11-08 17:04:36.534068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:59.957 [2024-11-08 17:04:36.657192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.957 [2024-11-08 17:04:36.657377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.890 17:04:37 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:00.890 17:04:37 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:17:00.890 17:04:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:00.890 Malloc0 00:17:00.890 17:04:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:01.147 Malloc1 00:17:01.147 17:04:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:01.147 17:04:37 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.147 17:04:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:01.405 /dev/nbd0 00:17:01.405 17:04:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:01.405 17:04:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:01.405 1+0 records in 00:17:01.405 1+0 
records out 00:17:01.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418878 s, 9.8 MB/s 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:01.405 17:04:38 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:17:01.405 17:04:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.405 17:04:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.405 17:04:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:01.663 /dev/nbd1 00:17:01.663 17:04:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:01.663 17:04:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:01.663 1+0 records in 00:17:01.663 1+0 records out 00:17:01.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385288 s, 10.6 MB/s 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:01.663 17:04:38 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:17:01.663 17:04:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.663 17:04:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.663 17:04:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:01.663 17:04:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:01.663 17:04:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:01.922 { 00:17:01.922 "nbd_device": "/dev/nbd0", 00:17:01.922 "bdev_name": "Malloc0" 00:17:01.922 }, 00:17:01.922 { 00:17:01.922 "nbd_device": "/dev/nbd1", 00:17:01.922 "bdev_name": "Malloc1" 00:17:01.922 } 00:17:01.922 ]' 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:01.922 { 00:17:01.922 "nbd_device": "/dev/nbd0", 00:17:01.922 "bdev_name": "Malloc0" 00:17:01.922 }, 00:17:01.922 { 00:17:01.922 "nbd_device": "/dev/nbd1", 00:17:01.922 "bdev_name": "Malloc1" 00:17:01.922 } 00:17:01.922 ]' 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:01.922 /dev/nbd1' 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:01.922 /dev/nbd1' 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:01.922 256+0 records in 00:17:01.922 256+0 records out 00:17:01.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111607 s, 94.0 MB/s 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:01.922 256+0 records in 00:17:01.922 256+0 records out 00:17:01.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0704577 s, 14.9 MB/s 00:17:01.922 17:04:38 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:01.922 17:04:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:02.180 256+0 records in 00:17:02.180 256+0 records out 00:17:02.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227031 s, 46.2 MB/s 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.180 17:04:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:02.438 17:04:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:02.439 17:04:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:02.439 17:04:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:02.439 17:04:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.439 17:04:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.439 17:04:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:02.439 17:04:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:17:02.439 17:04:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.439 17:04:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:02.439 17:04:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:02.439 17:04:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:02.696 17:04:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:17:02.696 17:04:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:03.262 17:04:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:17:03.827 [2024-11-08 17:04:40.492235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:04.088 [2024-11-08 17:04:40.609459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.088 [2024-11-08 17:04:40.609618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.088 
[2024-11-08 17:04:40.746179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:04.088 [2024-11-08 17:04:40.746280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:06.635 spdk_app_start Round 1 00:17:06.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:06.635 17:04:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:17:06.635 17:04:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:17:06.635 17:04:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57598 /var/tmp/spdk-nbd.sock 00:17:06.635 17:04:42 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57598 ']' 00:17:06.635 17:04:42 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:06.635 17:04:42 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:06.635 17:04:42 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:17:06.635 17:04:42 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:06.635 17:04:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:06.635 17:04:42 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:06.635 17:04:42 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:17:06.635 17:04:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:06.635 Malloc0 00:17:06.635 17:04:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:06.893 Malloc1 00:17:06.893 17:04:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:06.893 17:04:43 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:06.893 17:04:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:07.151 /dev/nbd0 00:17:07.151 17:04:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.151 17:04:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:07.151 1+0 records in 00:17:07.151 1+0 records out 00:17:07.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752928 s, 5.4 MB/s 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:07.151 17:04:43 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:07.151 17:04:43 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:17:07.151 17:04:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.151 17:04:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.151 17:04:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:07.410 /dev/nbd1 00:17:07.410 17:04:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:07.410 17:04:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:07.410 17:04:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:07.410 17:04:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:17:07.410 17:04:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:07.410 17:04:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:07.410 17:04:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:07.410 17:04:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:17:07.410 17:04:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:07.410 17:04:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:07.410 17:04:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:07.410 1+0 records in 00:17:07.410 1+0 records out 00:17:07.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401245 s, 10.2 MB/s 00:17:07.410 17:04:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:07.410 17:04:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:17:07.410 17:04:44 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:07.410 17:04:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:07.410 17:04:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:17:07.410 17:04:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.410 17:04:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.410 17:04:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:07.410 17:04:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:07.410 17:04:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:07.670 { 00:17:07.670 "nbd_device": "/dev/nbd0", 00:17:07.670 "bdev_name": "Malloc0" 00:17:07.670 }, 00:17:07.670 { 00:17:07.670 "nbd_device": "/dev/nbd1", 00:17:07.670 "bdev_name": "Malloc1" 00:17:07.670 } 00:17:07.670 ]' 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:07.670 { 00:17:07.670 "nbd_device": "/dev/nbd0", 00:17:07.670 "bdev_name": "Malloc0" 00:17:07.670 }, 00:17:07.670 { 00:17:07.670 "nbd_device": "/dev/nbd1", 00:17:07.670 "bdev_name": "Malloc1" 00:17:07.670 } 00:17:07.670 ]' 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:07.670 /dev/nbd1' 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:07.670 /dev/nbd1' 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:07.670 
17:04:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:07.670 17:04:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:07.671 256+0 records in 00:17:07.671 256+0 records out 00:17:07.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00675375 s, 155 MB/s 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:07.671 256+0 records in 00:17:07.671 256+0 records out 00:17:07.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170568 s, 61.5 MB/s 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:07.671 256+0 records in 00:17:07.671 256+0 records out 00:17:07.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209786 s, 50.0 MB/s 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.671 17:04:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:07.932 17:04:44 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:07.932 17:04:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:07.932 17:04:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:07.932 17:04:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.932 17:04:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.932 17:04:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:07.932 17:04:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:07.932 17:04:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.932 17:04:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.932 17:04:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:08.195 17:04:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:08.455 17:04:45 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:08.455 17:04:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:17:08.455 17:04:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:09.027 17:04:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:17:09.599 [2024-11-08 17:04:46.201962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:09.862 [2024-11-08 17:04:46.330502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.863 [2024-11-08 17:04:46.330511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.863 [2024-11-08 17:04:46.469428] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:09.863 [2024-11-08 17:04:46.469519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:11.774 spdk_app_start Round 2 00:17:11.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:17:11.774 17:04:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:17:11.774 17:04:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:17:11.774 17:04:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57598 /var/tmp/spdk-nbd.sock 00:17:11.774 17:04:48 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57598 ']' 00:17:11.774 17:04:48 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:11.774 17:04:48 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:11.774 17:04:48 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:11.774 17:04:48 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:11.774 17:04:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:12.035 17:04:48 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.035 17:04:48 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:17:12.035 17:04:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:12.297 Malloc0 00:17:12.297 17:04:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:12.559 Malloc1 00:17:12.559 17:04:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.559 17:04:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:12.821 /dev/nbd0 00:17:12.821 17:04:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.821 17:04:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:12.821 1+0 records in 00:17:12.821 1+0 records out 00:17:12.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361386 s, 11.3 MB/s 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:12.821 17:04:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:17:12.821 17:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.821 17:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.822 17:04:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:13.084 /dev/nbd1 00:17:13.084 17:04:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:13.084 17:04:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:13.084 17:04:49 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:13.084 1+0 records in 00:17:13.084 1+0 records out 00:17:13.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276016 s, 14.8 MB/s 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:13.084 17:04:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:17:13.084 17:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:13.084 17:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.084 17:04:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:13.084 17:04:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.084 17:04:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:13.345 17:04:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:13.345 { 00:17:13.345 "nbd_device": "/dev/nbd0", 00:17:13.345 "bdev_name": "Malloc0" 00:17:13.345 }, 00:17:13.345 { 00:17:13.345 "nbd_device": "/dev/nbd1", 00:17:13.345 "bdev_name": "Malloc1" 00:17:13.345 } 00:17:13.345 ]' 00:17:13.345 17:04:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:13.345 { 
00:17:13.345 "nbd_device": "/dev/nbd0", 00:17:13.345 "bdev_name": "Malloc0" 00:17:13.345 }, 00:17:13.345 { 00:17:13.345 "nbd_device": "/dev/nbd1", 00:17:13.345 "bdev_name": "Malloc1" 00:17:13.345 } 00:17:13.345 ]' 00:17:13.345 17:04:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:13.345 /dev/nbd1' 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:13.345 /dev/nbd1' 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:13.345 256+0 records in 00:17:13.345 256+0 records out 00:17:13.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123865 s, 84.7 MB/s 00:17:13.345 17:04:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:13.345 17:04:50 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:13.605 256+0 records in 00:17:13.605 256+0 records out 00:17:13.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180925 s, 58.0 MB/s 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:13.605 256+0 records in 00:17:13.605 256+0 records out 00:17:13.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207491 s, 50.5 MB/s 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.605 17:04:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:13.606 17:04:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:13.866 17:04:50 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.866 17:04:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:14.126 17:04:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:17:14.126 17:04:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:14.697 17:04:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:17:15.641 
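The device-count check traced above (`nbd_get_disks` → `jq -r '.[] | .nbd_device'` → `grep -c /dev/nbd`) can be sketched in isolation. The JSON literal below mirrors the shape the log shows for two attached devices; `grep -o` stands in for `jq` only so the sketch has no external dependency, and is not what `nbd_common.sh` itself uses:

```shell
#!/usr/bin/env bash
# Sample payload in the shape nbd_get_disks returned in the trace above.
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]'

# The real script extracts one device path per line with jq, then counts
# matching lines with `grep -c /dev/nbd`. grep -o emulates that extraction.
nbd_disks_name=$(printf '%s\n' "$nbd_disks_json" | grep -o '/dev/nbd[0-9]*')
count=$(printf '%s\n' "$nbd_disks_name" | grep -c /dev/nbd)

echo "$count"
```

After the stop loop, the same pipeline runs against an empty `[]` payload and the `true` fallback in `nbd_common.sh@65` forces the count to 0, which is why the trace ends with `'[' 0 -ne 0 ']'` succeeding trivially.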
[2024-11-08 17:04:51.987256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:15.641 [2024-11-08 17:04:52.147067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.641 [2024-11-08 17:04:52.147244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.641 [2024-11-08 17:04:52.320441] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:15.641 [2024-11-08 17:04:52.320888] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:17.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:17.568 17:04:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57598 /var/tmp/spdk-nbd.sock 00:17:17.568 17:04:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 57598 ']' 00:17:17.568 17:04:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:17.568 17:04:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:17.568 17:04:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:17:17.568 17:04:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:17.568 17:04:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:17:17.830 17:04:54 event.app_repeat -- event/event.sh@39 -- # killprocess 57598 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 57598 ']' 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 57598 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57598 00:17:17.830 killing process with pid 57598 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57598' 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@971 -- # kill 57598 00:17:17.830 17:04:54 event.app_repeat -- common/autotest_common.sh@976 -- # wait 57598 00:17:18.794 spdk_app_start is called in Round 0. 00:17:18.794 Shutdown signal received, stop current app iteration 00:17:18.794 Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 reinitialization... 00:17:18.794 spdk_app_start is called in Round 1. 00:17:18.794 Shutdown signal received, stop current app iteration 00:17:18.794 Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 reinitialization... 00:17:18.794 spdk_app_start is called in Round 2. 
00:17:18.794 Shutdown signal received, stop current app iteration 00:17:18.794 Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 reinitialization... 00:17:18.794 spdk_app_start is called in Round 3. 00:17:18.794 Shutdown signal received, stop current app iteration 00:17:18.794 ************************************ 00:17:18.794 END TEST app_repeat 00:17:18.794 ************************************ 00:17:18.794 17:04:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:17:18.794 17:04:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:17:18.794 00:17:18.794 real 0m18.902s 00:17:18.794 user 0m40.761s 00:17:18.794 sys 0m2.571s 00:17:18.794 17:04:55 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:18.794 17:04:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:18.794 17:04:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:17:18.794 17:04:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:17:18.794 17:04:55 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:18.794 17:04:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.794 17:04:55 event -- common/autotest_common.sh@10 -- # set +x 00:17:18.794 ************************************ 00:17:18.794 START TEST cpu_locks 00:17:18.794 ************************************ 00:17:18.794 17:04:55 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:17:18.794 * Looking for test storage... 
00:17:18.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:17:18.794 17:04:55 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:18.794 17:04:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:17:18.794 17:04:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:18.794 17:04:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:17:18.794 17:04:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.795 17:04:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.795 17:04:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.795 17:04:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:17:18.795 17:04:55 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.795 17:04:55 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:18.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.795 --rc genhtml_branch_coverage=1 00:17:18.795 --rc genhtml_function_coverage=1 00:17:18.795 --rc genhtml_legend=1 00:17:18.795 --rc geninfo_all_blocks=1 00:17:18.795 --rc geninfo_unexecuted_blocks=1 00:17:18.795 00:17:18.795 ' 00:17:18.795 17:04:55 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:18.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.795 --rc genhtml_branch_coverage=1 00:17:18.795 --rc genhtml_function_coverage=1 00:17:18.795 --rc genhtml_legend=1 00:17:18.795 --rc geninfo_all_blocks=1 00:17:18.795 --rc geninfo_unexecuted_blocks=1 
00:17:18.795 00:17:18.795 ' 00:17:18.795 17:04:55 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:18.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.795 --rc genhtml_branch_coverage=1 00:17:18.795 --rc genhtml_function_coverage=1 00:17:18.795 --rc genhtml_legend=1 00:17:18.795 --rc geninfo_all_blocks=1 00:17:18.795 --rc geninfo_unexecuted_blocks=1 00:17:18.795 00:17:18.795 ' 00:17:18.795 17:04:55 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:18.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.795 --rc genhtml_branch_coverage=1 00:17:18.795 --rc genhtml_function_coverage=1 00:17:18.795 --rc genhtml_legend=1 00:17:18.795 --rc geninfo_all_blocks=1 00:17:18.795 --rc geninfo_unexecuted_blocks=1 00:17:18.795 00:17:18.795 ' 00:17:18.795 17:04:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:17:18.795 17:04:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:17:18.795 17:04:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:17:18.795 17:04:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:17:18.795 17:04:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:18.795 17:04:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.795 17:04:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:18.795 ************************************ 00:17:18.795 START TEST default_locks 00:17:18.795 ************************************ 00:17:18.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:18.795 17:04:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:17:18.795 17:04:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58040 00:17:18.795 17:04:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:18.795 17:04:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58040 00:17:18.795 17:04:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58040 ']' 00:17:18.795 17:04:55 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.795 17:04:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:18.795 17:04:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.795 17:04:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:18.795 17:04:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:17:19.056 [2024-11-08 17:04:55.587645] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:17:19.056 [2024-11-08 17:04:55.587851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58040 ] 00:17:19.056 [2024-11-08 17:04:55.756861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.317 [2024-11-08 17:04:55.925120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58040 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58040 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58040 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58040 ']' 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58040 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:20.253 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58040 00:17:20.511 killing process with pid 58040 00:17:20.511 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:20.511 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:20.511 17:04:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58040' 00:17:20.511 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58040 00:17:20.511 17:04:56 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58040 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58040 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58040 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:17:21.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.942 ERROR: process (pid: 58040) is no longer running 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58040 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58040 ']' 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:17:21.942 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58040) - No such process 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:17:21.942 00:17:21.942 real 0m3.140s 00:17:21.942 user 0m2.929s 00:17:21.942 sys 0m0.691s 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:21.942 17:04:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:17:21.942 ************************************ 00:17:21.942 END TEST default_locks 00:17:21.942 ************************************ 00:17:22.203 17:04:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:17:22.203 17:04:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:22.204 17:04:58 event.cpu_locks -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:17:22.204 17:04:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:22.204 ************************************ 00:17:22.204 START TEST default_locks_via_rpc 00:17:22.204 ************************************ 00:17:22.204 17:04:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:17:22.204 17:04:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58104 00:17:22.204 17:04:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58104 00:17:22.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.204 17:04:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58104 ']' 00:17:22.204 17:04:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.204 17:04:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:22.204 17:04:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.204 17:04:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:22.204 17:04:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.204 17:04:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:22.204 [2024-11-08 17:04:58.759606] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:17:22.204 [2024-11-08 17:04:58.759750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58104 ] 00:17:22.463 [2024-11-08 17:04:58.924101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.463 [2024-11-08 17:04:59.056328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:17:23.027 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.028 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.028 17:04:59 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.028 17:04:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58104 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58104 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58104 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58104 ']' 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58104 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58104 00:17:23.285 killing process with pid 58104 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58104' 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58104 00:17:23.285 17:04:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58104 00:17:25.189 00:17:25.189 real 0m2.953s 00:17:25.189 user 0m2.896s 00:17:25.189 sys 0m0.499s 00:17:25.189 17:05:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:25.189 17:05:01 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.189 ************************************ 00:17:25.189 END TEST default_locks_via_rpc 00:17:25.189 ************************************ 00:17:25.189 17:05:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:17:25.189 17:05:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:25.189 17:05:01 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:25.189 17:05:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:25.189 ************************************ 00:17:25.189 START TEST non_locking_app_on_locked_coremask 00:17:25.189 ************************************ 00:17:25.189 17:05:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:17:25.189 17:05:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58167 00:17:25.189 17:05:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:25.189 17:05:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58167 /var/tmp/spdk.sock 00:17:25.189 17:05:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58167 ']' 00:17:25.189 17:05:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:25.189 17:05:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:25.189 17:05:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.189 17:05:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:25.189 17:05:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:25.189 [2024-11-08 17:05:01.834243] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:17:25.189 [2024-11-08 17:05:01.834438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58167 ] 00:17:25.449 [2024-11-08 17:05:02.014772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.710 [2024-11-08 17:05:02.179431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58183 00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58183 /var/tmp/spdk2.sock 00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58183 ']' 00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:26.652 17:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:17:26.652 [2024-11-08 17:05:03.094213] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:17:26.652 [2024-11-08 17:05:03.094382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58183 ] 00:17:26.652 [2024-11-08 17:05:03.281166] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:26.652 [2024-11-08 17:05:03.281261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.913 [2024-11-08 17:05:03.611333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58167 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58167 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58167 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58167 ']' 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58167 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:29.521 17:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
58167 00:17:29.521 killing process with pid 58167 00:17:29.521 17:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:29.521 17:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:29.521 17:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58167' 00:17:29.521 17:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58167 00:17:29.521 17:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58167 00:17:32.818 17:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58183 00:17:32.818 17:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58183 ']' 00:17:32.818 17:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58183 00:17:32.818 17:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:17:32.818 17:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:32.818 17:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58183 00:17:32.818 killing process with pid 58183 00:17:32.818 17:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:32.818 17:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:32.818 17:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58183' 00:17:32.818 17:05:09 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58183 00:17:32.818 17:05:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58183 00:17:34.734 ************************************ 00:17:34.734 END TEST non_locking_app_on_locked_coremask 00:17:34.734 ************************************ 00:17:34.734 00:17:34.734 real 0m9.451s 00:17:34.734 user 0m9.514s 00:17:34.734 sys 0m1.347s 00:17:34.734 17:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:34.734 17:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:34.734 17:05:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:17:34.734 17:05:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:34.734 17:05:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:34.734 17:05:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:34.734 ************************************ 00:17:34.734 START TEST locking_app_on_unlocked_coremask 00:17:34.734 ************************************ 00:17:34.734 17:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:17:34.734 17:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58314 00:17:34.734 17:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58314 /var/tmp/spdk.sock 00:17:34.734 17:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58314 ']' 00:17:34.734 17:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.734 17:05:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:34.734 17:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:17:34.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.734 17:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.734 17:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:34.734 17:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:34.734 [2024-11-08 17:05:11.333729] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:17:34.734 [2024-11-08 17:05:11.333960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58314 ] 00:17:34.995 [2024-11-08 17:05:11.505357] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:34.995 [2024-11-08 17:05:11.505470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.995 [2024-11-08 17:05:11.693406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58331 00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58331 /var/tmp/spdk2.sock 00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58331 ']' 00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:35.940 17:05:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:36.202 [2024-11-08 17:05:12.656654] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:17:36.202 [2024-11-08 17:05:12.657103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58331 ] 00:17:36.203 [2024-11-08 17:05:12.843036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.777 [2024-11-08 17:05:13.179422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.690 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:38.690 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:17:38.690 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58331 00:17:38.690 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:38.690 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58331 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58314 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58314 ']' 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58314 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58314 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:17:39.263 killing process with pid 58314 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58314' 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58314 00:17:39.263 17:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58314 00:17:43.480 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58331 00:17:43.481 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58331 ']' 00:17:43.481 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58331 00:17:43.481 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:17:43.481 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:43.481 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58331 00:17:43.481 killing process with pid 58331 00:17:43.481 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:43.481 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:43.481 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58331' 00:17:43.481 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58331 00:17:43.481 17:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@976 -- # wait 58331 00:17:44.914 00:17:44.914 real 0m10.248s 00:17:44.914 user 0m10.325s 00:17:44.914 sys 0m1.457s 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:44.914 ************************************ 00:17:44.914 END TEST locking_app_on_unlocked_coremask 00:17:44.914 ************************************ 00:17:44.914 17:05:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:17:44.914 17:05:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:44.914 17:05:21 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:44.914 17:05:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:44.914 ************************************ 00:17:44.914 START TEST locking_app_on_locked_coremask 00:17:44.914 ************************************ 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58468 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58468 /var/tmp/spdk.sock 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58468 ']' 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:17:44.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:44.914 17:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:45.199 [2024-11-08 17:05:21.661019] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:17:45.199 [2024-11-08 17:05:21.661232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58468 ] 00:17:45.199 [2024-11-08 17:05:21.835250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.461 [2024-11-08 17:05:22.003143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58484 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58484 /var/tmp/spdk2.sock 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58484 
/var/tmp/spdk2.sock 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58484 /var/tmp/spdk2.sock 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58484 ']' 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:46.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:46.404 17:05:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:46.404 [2024-11-08 17:05:22.947226] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:17:46.404 [2024-11-08 17:05:22.947721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58484 ] 00:17:46.666 [2024-11-08 17:05:23.137056] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58468 has claimed it. 00:17:46.666 [2024-11-08 17:05:23.137182] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:17:46.931 ERROR: process (pid: 58484) is no longer running 00:17:46.931 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58484) - No such process 00:17:46.931 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:46.931 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:17:46.931 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:17:46.931 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.931 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.931 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.931 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58468 00:17:46.931 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58468 00:17:46.931 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:47.192 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58468 00:17:47.192 17:05:23 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58468 ']' 00:17:47.192 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58468 00:17:47.192 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:17:47.192 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:47.192 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58468 00:17:47.192 killing process with pid 58468 00:17:47.192 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:47.192 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:47.192 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58468' 00:17:47.192 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58468 00:17:47.192 17:05:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58468 00:17:49.090 ************************************ 00:17:49.090 END TEST locking_app_on_locked_coremask 00:17:49.090 ************************************ 00:17:49.090 00:17:49.090 real 0m3.973s 00:17:49.090 user 0m4.082s 00:17:49.090 sys 0m0.851s 00:17:49.090 17:05:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:49.090 17:05:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:49.090 17:05:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:17:49.090 17:05:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 
00:17:49.090 17:05:25 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:49.090 17:05:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:49.090 ************************************ 00:17:49.090 START TEST locking_overlapped_coremask 00:17:49.090 ************************************ 00:17:49.090 17:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:17:49.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.090 17:05:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58543 00:17:49.090 17:05:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58543 /var/tmp/spdk.sock 00:17:49.090 17:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58543 ']' 00:17:49.090 17:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.090 17:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:49.090 17:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.090 17:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:49.090 17:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:49.090 17:05:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:49.090 [2024-11-08 17:05:25.678656] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:17:49.090 [2024-11-08 17:05:25.678823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58543 ] 00:17:49.348 [2024-11-08 17:05:25.840568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:49.348 [2024-11-08 17:05:25.965997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.348 [2024-11-08 17:05:25.966317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.348 [2024-11-08 17:05:25.966495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58566 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58566 /var/tmp/spdk2.sock 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58566 /var/tmp/spdk2.sock 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58566 /var/tmp/spdk2.sock 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58566 ']' 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:50.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.279 17:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:50.279 [2024-11-08 17:05:26.729019] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:17:50.279 [2024-11-08 17:05:26.729391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58566 ] 00:17:50.279 [2024-11-08 17:05:26.906580] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58543 has claimed it. 00:17:50.279 [2024-11-08 17:05:26.906671] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:17:50.845 ERROR: process (pid: 58566) is no longer running 00:17:50.845 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58566) - No such process 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58543 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58543 ']' 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58543 00:17:50.846 17:05:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58543 00:17:50.846 killing process with pid 58543 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58543' 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58543 00:17:50.846 17:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58543 00:17:52.743 ************************************ 00:17:52.743 END TEST locking_overlapped_coremask 00:17:52.743 ************************************ 00:17:52.743 00:17:52.743 real 0m3.471s 00:17:52.743 user 0m9.245s 00:17:52.743 sys 0m0.547s 00:17:52.743 17:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:52.743 17:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:52.743 17:05:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:17:52.743 17:05:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:52.744 17:05:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:52.744 17:05:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:52.744 ************************************ 00:17:52.744 START TEST 
locking_overlapped_coremask_via_rpc 00:17:52.744 ************************************ 00:17:52.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.744 17:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:17:52.744 17:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58619 00:17:52.744 17:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58619 /var/tmp/spdk.sock 00:17:52.744 17:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58619 ']' 00:17:52.744 17:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:17:52.744 17:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.744 17:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:52.744 17:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.744 17:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:52.744 17:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.744 [2024-11-08 17:05:29.219798] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:17:52.744 [2024-11-08 17:05:29.219951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58619 ] 00:17:52.744 [2024-11-08 17:05:29.385024] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:52.744 [2024-11-08 17:05:29.385108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:53.002 [2024-11-08 17:05:29.511122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.002 [2024-11-08 17:05:29.511418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.002 [2024-11-08 17:05:29.511502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.567 17:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:53.567 17:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:17:53.567 17:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58637 00:17:53.567 17:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:17:53.567 17:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58637 /var/tmp/spdk2.sock 00:17:53.567 17:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58637 ']' 00:17:53.567 17:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:53.567 17:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:53.567 17:05:30 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:53.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:53.567 17:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:53.567 17:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.825 [2024-11-08 17:05:30.283428] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:17:53.825 [2024-11-08 17:05:30.283767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58637 ] 00:17:53.825 [2024-11-08 17:05:30.460580] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:17:53.825 [2024-11-08 17:05:30.460671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:54.084 [2024-11-08 17:05:30.722528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.084 [2024-11-08 17:05:30.722688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.084 [2024-11-08 17:05:30.722708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.457 17:05:32 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.457 [2024-11-08 17:05:32.043976] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58619 has claimed it. 00:17:55.457 request: 00:17:55.457 { 00:17:55.457 "method": "framework_enable_cpumask_locks", 00:17:55.457 "req_id": 1 00:17:55.457 } 00:17:55.457 Got JSON-RPC error response 00:17:55.457 response: 00:17:55.457 { 00:17:55.457 "code": -32603, 00:17:55.457 "message": "Failed to claim CPU core: 2" 00:17:55.457 } 00:17:55.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58619 /var/tmp/spdk.sock 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58619 ']' 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:55.457 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.715 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:55.715 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:17:55.715 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58637 /var/tmp/spdk2.sock 00:17:55.715 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58637 ']' 00:17:55.715 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:55.715 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:55.715 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:55.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:17:55.715 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:55.715 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:55.973 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:17:55.973 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:17:55.973 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:55.973 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:55.973 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:55.973 00:17:55.973 real 0m3.429s 00:17:55.973 user 0m1.251s 00:17:55.973 sys 0m0.156s 00:17:55.973 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:55.973 17:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 ************************************ 00:17:55.973 END TEST locking_overlapped_coremask_via_rpc 00:17:55.973 ************************************ 00:17:55.973 17:05:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:17:55.973 17:05:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58619 ]] 00:17:55.973 17:05:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 58619 00:17:55.973 17:05:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58619 ']' 00:17:55.973 17:05:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58619 00:17:55.973 17:05:32 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:17:55.973 17:05:32 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:55.973 17:05:32 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58619 00:17:55.973 killing process with pid 58619 00:17:55.973 17:05:32 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:55.973 17:05:32 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:55.973 17:05:32 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58619' 00:17:55.973 17:05:32 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58619 00:17:55.973 17:05:32 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58619 00:17:57.871 17:05:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58637 ]] 00:17:57.871 17:05:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58637 00:17:57.871 17:05:34 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58637 ']' 00:17:57.871 17:05:34 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58637 00:17:57.871 17:05:34 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:17:57.871 17:05:34 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:57.871 17:05:34 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58637 00:17:57.871 killing process with pid 58637 00:17:57.871 17:05:34 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:17:57.871 17:05:34 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:17:57.871 17:05:34 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 58637' 00:17:57.871 17:05:34 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58637 00:17:57.871 17:05:34 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58637 00:17:59.266 17:05:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:59.266 Process with pid 58619 is not found 00:17:59.266 17:05:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:17:59.266 17:05:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58619 ]] 00:17:59.266 17:05:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58619 00:17:59.266 17:05:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58619 ']' 00:17:59.266 17:05:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58619 00:17:59.266 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58619) - No such process 00:17:59.266 17:05:35 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58619 is not found' 00:17:59.266 17:05:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58637 ]] 00:17:59.266 17:05:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58637 00:17:59.266 17:05:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58637 ']' 00:17:59.266 Process with pid 58637 is not found 00:17:59.266 17:05:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58637 00:17:59.266 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58637) - No such process 00:17:59.266 17:05:35 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58637 is not found' 00:17:59.266 17:05:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:17:59.266 ************************************ 00:17:59.266 END TEST cpu_locks 00:17:59.266 ************************************ 00:17:59.266 00:17:59.266 real 0m40.654s 00:17:59.266 user 1m5.641s 00:17:59.266 sys 0m6.564s 00:17:59.266 17:05:35 event.cpu_locks -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:17:59.266 17:05:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:59.524 ************************************ 00:17:59.524 END TEST event 00:17:59.524 ************************************ 00:17:59.524 00:17:59.524 real 1m9.573s 00:17:59.524 user 2m1.597s 00:17:59.524 sys 0m10.115s 00:17:59.524 17:05:35 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:59.524 17:05:35 event -- common/autotest_common.sh@10 -- # set +x 00:17:59.524 17:05:36 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:59.524 17:05:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:59.524 17:05:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:59.524 17:05:36 -- common/autotest_common.sh@10 -- # set +x 00:17:59.524 ************************************ 00:17:59.524 START TEST thread 00:17:59.524 ************************************ 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:59.524 * Looking for test storage... 
00:17:59.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:59.524 17:05:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.524 17:05:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.524 17:05:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.524 17:05:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.524 17:05:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.524 17:05:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.524 17:05:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.524 17:05:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.524 17:05:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.524 17:05:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.524 17:05:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.524 17:05:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:17:59.524 17:05:36 thread -- scripts/common.sh@345 -- # : 1 00:17:59.524 17:05:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.524 17:05:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.524 17:05:36 thread -- scripts/common.sh@365 -- # decimal 1 00:17:59.524 17:05:36 thread -- scripts/common.sh@353 -- # local d=1 00:17:59.524 17:05:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.524 17:05:36 thread -- scripts/common.sh@355 -- # echo 1 00:17:59.524 17:05:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.524 17:05:36 thread -- scripts/common.sh@366 -- # decimal 2 00:17:59.524 17:05:36 thread -- scripts/common.sh@353 -- # local d=2 00:17:59.524 17:05:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.524 17:05:36 thread -- scripts/common.sh@355 -- # echo 2 00:17:59.524 17:05:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.524 17:05:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.524 17:05:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.524 17:05:36 thread -- scripts/common.sh@368 -- # return 0 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.524 --rc genhtml_branch_coverage=1 00:17:59.524 --rc genhtml_function_coverage=1 00:17:59.524 --rc genhtml_legend=1 00:17:59.524 --rc geninfo_all_blocks=1 00:17:59.524 --rc geninfo_unexecuted_blocks=1 00:17:59.524 00:17:59.524 ' 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.524 --rc genhtml_branch_coverage=1 00:17:59.524 --rc genhtml_function_coverage=1 00:17:59.524 --rc genhtml_legend=1 00:17:59.524 --rc geninfo_all_blocks=1 00:17:59.524 --rc geninfo_unexecuted_blocks=1 00:17:59.524 00:17:59.524 ' 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:59.524 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.524 --rc genhtml_branch_coverage=1 00:17:59.524 --rc genhtml_function_coverage=1 00:17:59.524 --rc genhtml_legend=1 00:17:59.524 --rc geninfo_all_blocks=1 00:17:59.524 --rc geninfo_unexecuted_blocks=1 00:17:59.524 00:17:59.524 ' 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.524 --rc genhtml_branch_coverage=1 00:17:59.524 --rc genhtml_function_coverage=1 00:17:59.524 --rc genhtml_legend=1 00:17:59.524 --rc geninfo_all_blocks=1 00:17:59.524 --rc geninfo_unexecuted_blocks=1 00:17:59.524 00:17:59.524 ' 00:17:59.524 17:05:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:59.524 17:05:36 thread -- common/autotest_common.sh@10 -- # set +x 00:17:59.524 ************************************ 00:17:59.524 START TEST thread_poller_perf 00:17:59.524 ************************************ 00:17:59.524 17:05:36 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:59.782 [2024-11-08 17:05:36.251670] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:17:59.782 [2024-11-08 17:05:36.251970] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58808 ] 00:17:59.782 [2024-11-08 17:05:36.406002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.039 [2024-11-08 17:05:36.524837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.039 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:18:01.411 [2024-11-08T17:05:38.126Z] ====================================== 00:18:01.411 [2024-11-08T17:05:38.126Z] busy:2616363512 (cyc) 00:18:01.411 [2024-11-08T17:05:38.126Z] total_run_count: 307000 00:18:01.411 [2024-11-08T17:05:38.126Z] tsc_hz: 2600000000 (cyc) 00:18:01.411 [2024-11-08T17:05:38.126Z] ====================================== 00:18:01.411 [2024-11-08T17:05:38.126Z] poller_cost: 8522 (cyc), 3277 (nsec) 00:18:01.411 00:18:01.411 real 0m1.488s 00:18:01.411 ************************************ 00:18:01.411 user 0m1.306s 00:18:01.411 sys 0m0.073s 00:18:01.411 17:05:37 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:01.411 17:05:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:01.411 END TEST thread_poller_perf 00:18:01.411 ************************************ 00:18:01.411 17:05:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:01.411 17:05:37 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:18:01.411 17:05:37 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:01.411 17:05:37 thread -- common/autotest_common.sh@10 -- # set +x 00:18:01.411 ************************************ 00:18:01.411 START TEST thread_poller_perf 00:18:01.411 
************************************ 00:18:01.411 17:05:37 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:01.411 [2024-11-08 17:05:37.810040] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:18:01.411 [2024-11-08 17:05:37.810157] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58839 ] 00:18:01.411 [2024-11-08 17:05:37.972191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.411 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:18:01.411 [2024-11-08 17:05:38.090582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.783 [2024-11-08T17:05:39.498Z] ====================================== 00:18:02.783 [2024-11-08T17:05:39.498Z] busy:2604458016 (cyc) 00:18:02.783 [2024-11-08T17:05:39.498Z] total_run_count: 3971000 00:18:02.783 [2024-11-08T17:05:39.498Z] tsc_hz: 2600000000 (cyc) 00:18:02.783 [2024-11-08T17:05:39.498Z] ====================================== 00:18:02.783 [2024-11-08T17:05:39.498Z] poller_cost: 655 (cyc), 251 (nsec) 00:18:02.783 00:18:02.783 real 0m1.477s 00:18:02.783 user 0m1.289s 00:18:02.783 sys 0m0.080s 00:18:02.783 ************************************ 00:18:02.783 END TEST thread_poller_perf 00:18:02.783 ************************************ 00:18:02.783 17:05:39 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:02.783 17:05:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:02.783 17:05:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:18:02.783 ************************************ 00:18:02.783 END TEST thread 00:18:02.783 ************************************ 00:18:02.783 
00:18:02.783 real 0m3.264s 00:18:02.783 user 0m2.723s 00:18:02.783 sys 0m0.274s 00:18:02.783 17:05:39 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:02.783 17:05:39 thread -- common/autotest_common.sh@10 -- # set +x 00:18:02.783 17:05:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:18:02.783 17:05:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:02.783 17:05:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:02.783 17:05:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:02.783 17:05:39 -- common/autotest_common.sh@10 -- # set +x 00:18:02.783 ************************************ 00:18:02.783 START TEST app_cmdline 00:18:02.783 ************************************ 00:18:02.783 17:05:39 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:02.783 * Looking for test storage... 00:18:02.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:02.784 17:05:39 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:02.784 17:05:39 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:18:02.784 17:05:39 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.042 17:05:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:03.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.042 --rc genhtml_branch_coverage=1 00:18:03.042 --rc genhtml_function_coverage=1 00:18:03.042 --rc 
genhtml_legend=1 00:18:03.042 --rc geninfo_all_blocks=1 00:18:03.042 --rc geninfo_unexecuted_blocks=1 00:18:03.042 00:18:03.042 ' 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:03.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.042 --rc genhtml_branch_coverage=1 00:18:03.042 --rc genhtml_function_coverage=1 00:18:03.042 --rc genhtml_legend=1 00:18:03.042 --rc geninfo_all_blocks=1 00:18:03.042 --rc geninfo_unexecuted_blocks=1 00:18:03.042 00:18:03.042 ' 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:03.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.042 --rc genhtml_branch_coverage=1 00:18:03.042 --rc genhtml_function_coverage=1 00:18:03.042 --rc genhtml_legend=1 00:18:03.042 --rc geninfo_all_blocks=1 00:18:03.042 --rc geninfo_unexecuted_blocks=1 00:18:03.042 00:18:03.042 ' 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:03.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.042 --rc genhtml_branch_coverage=1 00:18:03.042 --rc genhtml_function_coverage=1 00:18:03.042 --rc genhtml_legend=1 00:18:03.042 --rc geninfo_all_blocks=1 00:18:03.042 --rc geninfo_unexecuted_blocks=1 00:18:03.042 00:18:03.042 ' 00:18:03.042 17:05:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:18:03.042 17:05:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58928 00:18:03.042 17:05:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58928 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 58928 ']' 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.042 17:05:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:03.042 17:05:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:03.042 [2024-11-08 17:05:39.609707] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:18:03.042 [2024-11-08 17:05:39.609881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58928 ] 00:18:03.316 [2024-11-08 17:05:39.772849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.316 [2024-11-08 17:05:39.891778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.881 17:05:40 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:03.881 17:05:40 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:18:03.881 17:05:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:18:04.139 { 00:18:04.139 "version": "SPDK v25.01-pre git sha1 5b0ad6d60", 00:18:04.139 "fields": { 00:18:04.139 "major": 25, 00:18:04.139 "minor": 1, 00:18:04.139 "patch": 0, 00:18:04.139 "suffix": "-pre", 00:18:04.139 "commit": "5b0ad6d60" 00:18:04.139 } 00:18:04.139 } 00:18:04.139 17:05:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:18:04.139 17:05:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:18:04.139 17:05:40 app_cmdline -- 
app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:18:04.139 17:05:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:18:04.139 17:05:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:04.139 17:05:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:18:04.139 17:05:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.139 17:05:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:18:04.139 17:05:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:18:04.139 17:05:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@644 
-- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:04.139 17:05:40 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:04.397 request: 00:18:04.397 { 00:18:04.397 "method": "env_dpdk_get_mem_stats", 00:18:04.397 "req_id": 1 00:18:04.397 } 00:18:04.397 Got JSON-RPC error response 00:18:04.397 response: 00:18:04.397 { 00:18:04.397 "code": -32601, 00:18:04.397 "message": "Method not found" 00:18:04.397 } 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.397 17:05:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58928 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 58928 ']' 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 58928 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58928 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:04.397 killing process with pid 58928 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58928' 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@971 -- # kill 58928 00:18:04.397 17:05:41 app_cmdline -- common/autotest_common.sh@976 -- # wait 58928 00:18:06.307 
00:18:06.307 real 0m3.260s 00:18:06.307 user 0m3.504s 00:18:06.307 sys 0m0.522s 00:18:06.307 ************************************ 00:18:06.307 END TEST app_cmdline 00:18:06.307 ************************************ 00:18:06.307 17:05:42 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:06.307 17:05:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:06.307 17:05:42 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:06.307 17:05:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:06.307 17:05:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:06.307 17:05:42 -- common/autotest_common.sh@10 -- # set +x 00:18:06.307 ************************************ 00:18:06.307 START TEST version 00:18:06.307 ************************************ 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:06.307 * Looking for test storage... 
00:18:06.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1691 -- # lcov --version 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:06.307 17:05:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.307 17:05:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.307 17:05:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.307 17:05:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.307 17:05:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.307 17:05:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.307 17:05:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.307 17:05:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.307 17:05:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.307 17:05:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.307 17:05:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.307 17:05:42 version -- scripts/common.sh@344 -- # case "$op" in 00:18:06.307 17:05:42 version -- scripts/common.sh@345 -- # : 1 00:18:06.307 17:05:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.307 17:05:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:06.307 17:05:42 version -- scripts/common.sh@365 -- # decimal 1 00:18:06.307 17:05:42 version -- scripts/common.sh@353 -- # local d=1 00:18:06.307 17:05:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.307 17:05:42 version -- scripts/common.sh@355 -- # echo 1 00:18:06.307 17:05:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.307 17:05:42 version -- scripts/common.sh@366 -- # decimal 2 00:18:06.307 17:05:42 version -- scripts/common.sh@353 -- # local d=2 00:18:06.307 17:05:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.307 17:05:42 version -- scripts/common.sh@355 -- # echo 2 00:18:06.307 17:05:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.307 17:05:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.307 17:05:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.307 17:05:42 version -- scripts/common.sh@368 -- # return 0 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:06.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.307 --rc genhtml_branch_coverage=1 00:18:06.307 --rc genhtml_function_coverage=1 00:18:06.307 --rc genhtml_legend=1 00:18:06.307 --rc geninfo_all_blocks=1 00:18:06.307 --rc geninfo_unexecuted_blocks=1 00:18:06.307 00:18:06.307 ' 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:06.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.307 --rc genhtml_branch_coverage=1 00:18:06.307 --rc genhtml_function_coverage=1 00:18:06.307 --rc genhtml_legend=1 00:18:06.307 --rc geninfo_all_blocks=1 00:18:06.307 --rc geninfo_unexecuted_blocks=1 00:18:06.307 00:18:06.307 ' 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:06.307 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.307 --rc genhtml_branch_coverage=1 00:18:06.307 --rc genhtml_function_coverage=1 00:18:06.307 --rc genhtml_legend=1 00:18:06.307 --rc geninfo_all_blocks=1 00:18:06.307 --rc geninfo_unexecuted_blocks=1 00:18:06.307 00:18:06.307 ' 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:06.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.307 --rc genhtml_branch_coverage=1 00:18:06.307 --rc genhtml_function_coverage=1 00:18:06.307 --rc genhtml_legend=1 00:18:06.307 --rc geninfo_all_blocks=1 00:18:06.307 --rc geninfo_unexecuted_blocks=1 00:18:06.307 00:18:06.307 ' 00:18:06.307 17:05:42 version -- app/version.sh@17 -- # get_header_version major 00:18:06.307 17:05:42 version -- app/version.sh@14 -- # tr -d '"' 00:18:06.307 17:05:42 version -- app/version.sh@14 -- # cut -f2 00:18:06.307 17:05:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:06.307 17:05:42 version -- app/version.sh@17 -- # major=25 00:18:06.307 17:05:42 version -- app/version.sh@18 -- # get_header_version minor 00:18:06.307 17:05:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:06.307 17:05:42 version -- app/version.sh@14 -- # cut -f2 00:18:06.307 17:05:42 version -- app/version.sh@14 -- # tr -d '"' 00:18:06.307 17:05:42 version -- app/version.sh@18 -- # minor=1 00:18:06.307 17:05:42 version -- app/version.sh@19 -- # get_header_version patch 00:18:06.307 17:05:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:06.307 17:05:42 version -- app/version.sh@14 -- # cut -f2 00:18:06.307 17:05:42 version -- app/version.sh@14 -- # tr -d '"' 00:18:06.307 17:05:42 version -- app/version.sh@19 -- # patch=0 00:18:06.307 
17:05:42 version -- app/version.sh@20 -- # get_header_version suffix 00:18:06.307 17:05:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:06.307 17:05:42 version -- app/version.sh@14 -- # cut -f2 00:18:06.307 17:05:42 version -- app/version.sh@14 -- # tr -d '"' 00:18:06.307 17:05:42 version -- app/version.sh@20 -- # suffix=-pre 00:18:06.307 17:05:42 version -- app/version.sh@22 -- # version=25.1 00:18:06.307 17:05:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:18:06.307 17:05:42 version -- app/version.sh@28 -- # version=25.1rc0 00:18:06.307 17:05:42 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:06.307 17:05:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:18:06.307 17:05:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:18:06.307 17:05:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:18:06.307 00:18:06.307 real 0m0.213s 00:18:06.307 user 0m0.131s 00:18:06.307 sys 0m0.106s 00:18:06.307 ************************************ 00:18:06.307 END TEST version 00:18:06.307 ************************************ 00:18:06.307 17:05:42 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:06.307 17:05:42 version -- common/autotest_common.sh@10 -- # set +x 00:18:06.307 17:05:42 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:18:06.307 17:05:42 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:18:06.308 17:05:42 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:06.308 17:05:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:06.308 17:05:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:06.308 17:05:42 -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.308 ************************************ 00:18:06.308 START TEST bdev_raid 00:18:06.308 ************************************ 00:18:06.308 17:05:42 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:06.565 * Looking for test storage... 00:18:06.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:06.565 17:05:43 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:06.565 17:05:43 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:18:06.565 17:05:43 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:06.565 17:05:43 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@345 -- # : 1 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.565 17:05:43 bdev_raid -- scripts/common.sh@368 -- # return 0 00:18:06.565 17:05:43 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.565 17:05:43 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:06.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.565 --rc genhtml_branch_coverage=1 00:18:06.565 --rc genhtml_function_coverage=1 00:18:06.565 --rc genhtml_legend=1 00:18:06.565 --rc geninfo_all_blocks=1 00:18:06.565 --rc geninfo_unexecuted_blocks=1 00:18:06.565 00:18:06.565 ' 00:18:06.565 17:05:43 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:06.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.565 --rc genhtml_branch_coverage=1 00:18:06.565 --rc genhtml_function_coverage=1 00:18:06.565 --rc genhtml_legend=1 00:18:06.565 --rc geninfo_all_blocks=1 00:18:06.565 --rc geninfo_unexecuted_blocks=1 00:18:06.565 00:18:06.565 ' 00:18:06.565 17:05:43 bdev_raid -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:18:06.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.565 --rc genhtml_branch_coverage=1 00:18:06.565 --rc genhtml_function_coverage=1 00:18:06.565 --rc genhtml_legend=1 00:18:06.565 --rc geninfo_all_blocks=1 00:18:06.565 --rc geninfo_unexecuted_blocks=1 00:18:06.566 00:18:06.566 ' 00:18:06.566 17:05:43 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:06.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.566 --rc genhtml_branch_coverage=1 00:18:06.566 --rc genhtml_function_coverage=1 00:18:06.566 --rc genhtml_legend=1 00:18:06.566 --rc geninfo_all_blocks=1 00:18:06.566 --rc geninfo_unexecuted_blocks=1 00:18:06.566 00:18:06.566 ' 00:18:06.566 17:05:43 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:06.566 17:05:43 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:18:06.566 17:05:43 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:18:06.566 17:05:43 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:18:06.566 17:05:43 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:18:06.566 17:05:43 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:18:06.566 17:05:43 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:18:06.566 17:05:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:06.566 17:05:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:06.566 17:05:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:06.566 ************************************ 00:18:06.566 START TEST raid1_resize_data_offset_test 00:18:06.566 ************************************ 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=59110 00:18:06.566 Process raid pid: 59110 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59110' 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59110 00:18:06.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 59110 ']' 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:06.566 17:05:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.566 [2024-11-08 17:05:43.208900] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:18:06.566 [2024-11-08 17:05:43.209043] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.823 [2024-11-08 17:05:43.373157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.823 [2024-11-08 17:05:43.499254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.080 [2024-11-08 17:05:43.662475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.080 [2024-11-08 17:05:43.662534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.645 malloc0 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.645 malloc1 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.645 17:05:44 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.645 null0 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.645 [2024-11-08 17:05:44.192281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:18:07.645 [2024-11-08 17:05:44.194276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:07.645 [2024-11-08 17:05:44.194331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:18:07.645 [2024-11-08 17:05:44.194474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:07.645 [2024-11-08 17:05:44.194489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:18:07.645 [2024-11-08 17:05:44.194803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:07.645 [2024-11-08 17:05:44.194969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:07.645 [2024-11-08 17:05:44.194981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:07.645 [2024-11-08 17:05:44.195133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.645 [2024-11-08 17:05:44.236306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.645 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.211 malloc2 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.211 [2024-11-08 17:05:44.631939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:08.211 [2024-11-08 17:05:44.644515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.211 [2024-11-08 17:05:44.646554] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59110 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 59110 ']' 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 59110 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:18:08.211 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59110 00:18:08.212 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:08.212 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:08.212 killing process with pid 59110 00:18:08.212 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59110' 00:18:08.212 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 59110 00:18:08.212 17:05:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 59110 00:18:08.212 [2024-11-08 17:05:44.706929] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.212 [2024-11-08 17:05:44.708748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:18:08.212 [2024-11-08 17:05:44.708824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.212 [2024-11-08 17:05:44.708841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:18:08.212 [2024-11-08 17:05:44.733290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.212 [2024-11-08 17:05:44.733617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.212 [2024-11-08 17:05:44.733632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:09.626 [2024-11-08 17:05:45.938782] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:10.227 17:05:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:18:10.227 00:18:10.227 real 0m3.574s 00:18:10.227 user 0m3.484s 00:18:10.227 sys 0m0.470s 00:18:10.227 
************************************ 00:18:10.227 17:05:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:10.227 17:05:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.227 END TEST raid1_resize_data_offset_test 00:18:10.227 ************************************ 00:18:10.227 17:05:46 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:18:10.228 17:05:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:10.228 17:05:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:10.228 17:05:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:10.228 ************************************ 00:18:10.228 START TEST raid0_resize_superblock_test 00:18:10.228 ************************************ 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59179 00:18:10.228 Process raid pid: 59179 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59179' 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59179 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59179 ']' 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.228 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:10.228 17:05:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.228 [2024-11-08 17:05:46.848240] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:18:10.228 [2024-11-08 17:05:46.848385] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.485 [2024-11-08 17:05:47.004581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.485 [2024-11-08 17:05:47.123250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.742 [2024-11-08 17:05:47.272675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.742 [2024-11-08 17:05:47.272716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.306 17:05:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:11.306 17:05:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:11.306 17:05:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:18:11.306 17:05:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.306 17:05:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:11.565 malloc0 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.565 [2024-11-08 17:05:48.137075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:11.565 [2024-11-08 17:05:48.137142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.565 [2024-11-08 17:05:48.137169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:11.565 [2024-11-08 17:05:48.137182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.565 [2024-11-08 17:05:48.139524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.565 [2024-11-08 17:05:48.139564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:18:11.565 pt0 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.565 030c39c0-5d7f-4454-8338-b26cb4016970 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.565 f041a64c-4d7d-4ce8-bd94-5b9235c2a0a4 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.565 c773259f-4b10-4aec-9b9d-554a625a851e 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.565 [2024-11-08 17:05:48.250708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f041a64c-4d7d-4ce8-bd94-5b9235c2a0a4 is claimed 00:18:11.565 [2024-11-08 17:05:48.250830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c773259f-4b10-4aec-9b9d-554a625a851e is claimed 00:18:11.565 [2024-11-08 17:05:48.250972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:11.565 [2024-11-08 17:05:48.250989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:18:11.565 [2024-11-08 17:05:48.251279] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:11.565 [2024-11-08 17:05:48.251459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:11.565 [2024-11-08 17:05:48.251468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:11.565 [2024-11-08 17:05:48.251632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.565 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:18:11.824 17:05:48 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 [2024-11-08 17:05:48.330986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 [2024-11-08 17:05:48.362999] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:11.824 [2024-11-08 17:05:48.363030] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f041a64c-4d7d-4ce8-bd94-5b9235c2a0a4' was resized: old size 131072, new size 204800 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 [2024-11-08 17:05:48.370864] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:11.824 [2024-11-08 17:05:48.370891] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c773259f-4b10-4aec-9b9d-554a625a851e' was resized: old size 131072, new size 204800 00:18:11.824 [2024-11-08 17:05:48.370915] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 17:05:48 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:18:11.824 [2024-11-08 17:05:48.455031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 [2024-11-08 17:05:48.482782] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:18:11.824 [2024-11-08 17:05:48.482863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:18:11.824 [2024-11-08 17:05:48.482876] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.824 [2024-11-08 17:05:48.482896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:18:11.824 [2024-11-08 17:05:48.483018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.824 [2024-11-08 17:05:48.483058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.824 [2024-11-08 17:05:48.483071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 [2024-11-08 17:05:48.490684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:11.824 [2024-11-08 17:05:48.490743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.824 [2024-11-08 17:05:48.490784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:11.824 [2024-11-08 17:05:48.490797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.824 [2024-11-08 17:05:48.493135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.824 [2024-11-08 17:05:48.493174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:18:11.824 [2024-11-08 17:05:48.494856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f041a64c-4d7d-4ce8-bd94-5b9235c2a0a4 00:18:11.824 [2024-11-08 17:05:48.494916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f041a64c-4d7d-4ce8-bd94-5b9235c2a0a4 is claimed 00:18:11.824 [2024-11-08 17:05:48.495021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c773259f-4b10-4aec-9b9d-554a625a851e 00:18:11.824 [2024-11-08 17:05:48.495039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c773259f-4b10-4aec-9b9d-554a625a851e is claimed 00:18:11.824 [2024-11-08 17:05:48.495190] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c773259f-4b10-4aec-9b9d-554a625a851e (2) smaller than existing raid bdev Raid (3) 00:18:11.824 [2024-11-08 17:05:48.495213] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f041a64c-4d7d-4ce8-bd94-5b9235c2a0a4: File exists 00:18:11.824 [2024-11-08 17:05:48.495252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:11.824 [2024-11-08 17:05:48.495264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:18:11.824 [2024-11-08 17:05:48.495509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:11.824 pt0 00:18:11.824 [2024-11-08 17:05:48.495650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:11.824 [2024-11-08 17:05:48.495666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:18:11.824 [2024-11-08 17:05:48.495826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:11.824 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:11.825 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:18:11.825 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.825 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.825 [2024-11-08 17:05:48.511204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.825 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59179 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59179 ']' 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59179 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59179 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:12.082 killing process with pid 59179 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59179' 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59179 00:18:12.082 17:05:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59179 00:18:12.082 [2024-11-08 17:05:48.565253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:12.082 [2024-11-08 17:05:48.565351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.082 [2024-11-08 17:05:48.565406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.082 [2024-11-08 17:05:48.565416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:18:13.014 [2024-11-08 17:05:49.493466] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:13.579 17:05:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:18:13.579 00:18:13.579 real 0m3.477s 00:18:13.579 user 0m3.644s 00:18:13.579 sys 0m0.487s 00:18:13.579 17:05:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:13.579 17:05:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.579 
************************************ 00:18:13.579 END TEST raid0_resize_superblock_test 00:18:13.579 ************************************ 00:18:13.837 17:05:50 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:18:13.837 17:05:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:13.837 17:05:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:13.837 17:05:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.837 ************************************ 00:18:13.837 START TEST raid1_resize_superblock_test 00:18:13.837 ************************************ 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:18:13.837 Process raid pid: 59267 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59267 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59267' 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59267 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59267 ']' 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:13.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:13.837 17:05:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.837 [2024-11-08 17:05:50.422150] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:18:13.837 [2024-11-08 17:05:50.422374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.094 [2024-11-08 17:05:50.604356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.094 [2024-11-08 17:05:50.724840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.351 [2024-11-08 17:05:50.875364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.351 [2024-11-08 17:05:50.875439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.609 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:14.609 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:14.609 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:18:14.609 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.609 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.188 malloc0 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.188 17:05:51 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.188 [2024-11-08 17:05:51.656063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:15.188 [2024-11-08 17:05:51.656144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.188 [2024-11-08 17:05:51.656178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:15.188 [2024-11-08 17:05:51.656196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.188 [2024-11-08 17:05:51.658690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.188 [2024-11-08 17:05:51.658737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:18:15.188 pt0 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.188 251456b8-45a1-4a4b-8a53-82b3e95899e3 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.188 17:05:51 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.188 df46f146-8051-4346-816b-06cae5c9a2a0 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.188 1b99d606-88e1-4d32-b93d-5826ab2ccea9 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.188 [2024-11-08 17:05:51.760437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev df46f146-8051-4346-816b-06cae5c9a2a0 is claimed 00:18:15.188 [2024-11-08 17:05:51.760545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1b99d606-88e1-4d32-b93d-5826ab2ccea9 is claimed 00:18:15.188 [2024-11-08 17:05:51.760695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:15.188 [2024-11-08 17:05:51.760710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:18:15.188 [2024-11-08 17:05:51.760999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:15.188 [2024-11-08 17:05:51.761190] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:15.188 [2024-11-08 17:05:51.761200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:15.188 [2024-11-08 17:05:51.761360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:18:15.188 [2024-11-08 17:05:51.840729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:15.188 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.189 [2024-11-08 17:05:51.872681] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:15.189 [2024-11-08 17:05:51.872717] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'df46f146-8051-4346-816b-06cae5c9a2a0' was resized: old size 131072, new size 204800 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:18:15.189 17:05:51 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.189 [2024-11-08 17:05:51.880574] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:15.189 [2024-11-08 17:05:51.880603] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1b99d606-88e1-4d32-b93d-5826ab2ccea9' was resized: old size 131072, new size 204800 00:18:15.189 [2024-11-08 17:05:51.880634] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.189 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- 
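The resize arithmetic visible in the log follows from the raid1 superblock reservation: each 64 MiB lvol is 131072 blocks of 512 bytes, and the raid bdev exposes the base size minus the superblock region (8192 blocks here, inferred from 131072 − 122880; the exact reservation is an implementation detail of SPDK's raid superblock). A minimal sketch of the RPC sequence exercised above, assuming a running SPDK app and `rpc.py` on PATH:

```shell
# Create a passthru bdev over malloc0 and an lvstore on top of it
rpc.py bdev_passthru_create -b malloc0 -p pt0
rpc.py bdev_lvol_create_lvstore pt0 lvs0

# Two 64 MiB lvols -> 131072 blocks each at 512 B blocklen
rpc.py bdev_lvol_create -l lvs0 lvol0 64
rpc.py bdev_lvol_create -l lvs0 lvol1 64

# raid1 with an on-disk superblock (-s); usable size drops to 122880 blocks
rpc.py bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s

# Growing both bases to 100 MiB raises the raid block count to 196608
rpc.py bdev_lvol_resize lvs0/lvol0 100
rpc.py bdev_lvol_resize lvs0/lvol1 100
```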
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:18:15.445 [2024-11-08 17:05:51.960774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.445 17:05:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.445 [2024-11-08 17:05:51.996517] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:18:15.446 [2024-11-08 17:05:51.996618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:18:15.446 [2024-11-08 17:05:51.996656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:18:15.446 [2024-11-08 17:05:51.996893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.446 [2024-11-08 17:05:51.997114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.446 [2024-11-08 17:05:51.997192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.446 [2024-11-08 17:05:51.997205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.446 [2024-11-08 17:05:52.008417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:15.446 [2024-11-08 17:05:52.008484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.446 [2024-11-08 17:05:52.008511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:15.446 [2024-11-08 17:05:52.008531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.446 [2024-11-08 17:05:52.010949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.446 [2024-11-08 17:05:52.010993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:18:15.446 [2024-11-08 17:05:52.012701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
df46f146-8051-4346-816b-06cae5c9a2a0 00:18:15.446 [2024-11-08 17:05:52.012795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev df46f146-8051-4346-816b-06cae5c9a2a0 is claimed 00:18:15.446 [2024-11-08 17:05:52.012904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1b99d606-88e1-4d32-b93d-5826ab2ccea9 00:18:15.446 [2024-11-08 17:05:52.012922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1b99d606-88e1-4d32-b93d-5826ab2ccea9 is claimed 00:18:15.446 [2024-11-08 17:05:52.013076] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 1b99d606-88e1-4d32-b93d-5826ab2ccea9 (2) smaller than existing raid bdev Raid (3) 00:18:15.446 [2024-11-08 17:05:52.013096] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev df46f146-8051-4346-816b-06cae5c9a2a0: File exists 00:18:15.446 [2024-11-08 17:05:52.013140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:15.446 [2024-11-08 17:05:52.013157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:15.446 [2024-11-08 17:05:52.013426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:15.446 [2024-11-08 17:05:52.013587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:15.446 [2024-11-08 17:05:52.013596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:18:15.446 pt0 00:18:15.446 [2024-11-08 17:05:52.013745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
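Deleting the passthru bdev above closes the lvstore and takes the raid offline; recreating it re-exposes the lvols, and the examine path finds the raid superblock on each base bdev and reassembles Raid at the post-resize size without any explicit `bdev_raid_create`. A sketch of that phase:

```shell
# Tearing down the passthru closes lvs0 and transitions the raid to offline
rpc.py bdev_passthru_delete pt0

# Recreating it re-exposes the lvols; examine discovers the superblock on
# each base bdev and reassembles the Raid bdev automatically
rpc.py bdev_passthru_create -b malloc0 -p pt0
rpc.py bdev_wait_for_examine

# The reassembled raid reports the resized block count (196608 in this run)
rpc.py bdev_get_bdevs -b Raid | jq '.[].num_blocks'
```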
xtrace_disable 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:18:15.446 [2024-11-08 17:05:52.029032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59267 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59267 ']' 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59267 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59267 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:15.446 killing process with pid 59267 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59267' 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59267 00:18:15.446 [2024-11-08 17:05:52.077490] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.446 [2024-11-08 17:05:52.077575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.446 [2024-11-08 17:05:52.077635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.446 [2024-11-08 17:05:52.077645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:18:15.446 17:05:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59267 00:18:16.378 [2024-11-08 17:05:53.026961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.310 17:05:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:18:17.310 00:18:17.310 real 0m3.473s 00:18:17.310 user 0m3.569s 00:18:17.310 sys 0m0.507s 00:18:17.310 17:05:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:17.310 ************************************ 00:18:17.310 17:05:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.310 END TEST raid1_resize_superblock_test 00:18:17.310 
************************************ 00:18:17.310 17:05:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:18:17.310 17:05:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:18:17.310 17:05:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:18:17.310 17:05:53 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:18:17.310 17:05:53 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:18:17.310 17:05:53 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:18:17.310 17:05:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:17.310 17:05:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:17.310 17:05:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.310 ************************************ 00:18:17.310 START TEST raid_function_test_raid0 00:18:17.310 ************************************ 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:18:17.310 Process raid pid: 59358 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=59358 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59358' 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 59358 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 
59358 ']' 00:18:17.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.310 17:05:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:17.310 [2024-11-08 17:05:53.960590] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:18:17.310 [2024-11-08 17:05:53.960738] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.567 [2024-11-08 17:05:54.121568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.567 [2024-11-08 17:05:54.242217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.823 [2024-11-08 17:05:54.395147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.823 [2024-11-08 17:05:54.395206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:18:18.393 17:05:54 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:18.393 Base_1 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:18.393 Base_2 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:18.393 [2024-11-08 17:05:54.942802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:18:18.393 [2024-11-08 17:05:54.944795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:18:18.393 [2024-11-08 17:05:54.944877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:18.393 [2024-11-08 17:05:54.944890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:18.393 [2024-11-08 17:05:54.945189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:18.393 [2024-11-08 17:05:54.945329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:18.393 [2024-11-08 17:05:54.945347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:18:18.393 [2024-11-08 17:05:54.945501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
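The raid0 function test above builds the array from two 32 MiB malloc bdevs (hence blockcnt 131072 at 512 B blocklen) and exports it over NBD so ordinary block tools can drive I/O against it. A sketch of the setup, with the same names the log uses:

```shell
# Two 32 MiB malloc base bdevs with 512 B blocks
rpc.py bdev_malloc_create 32 512 -b Base_1
rpc.py bdev_malloc_create 32 512 -b Base_2

# raid0 with a 64 KiB strip size; total blockcnt 131072 = 2 x 32 MiB / 512 B
rpc.py bdev_raid_create -z 64 -r raid0 -b 'Base_1 Base_2' -n raid

# Export the raid bdev as a kernel block device for dd/blkdiscard/cmp
rpc.py nbd_start_disk raid /dev/nbd0
```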
-- # (( i < 1 )) 00:18:18.393 17:05:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:18:18.651 [2024-11-08 17:05:55.174875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:18.651 /dev/nbd0 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.651 1+0 records in 00:18:18.651 1+0 records out 00:18:18.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305864 s, 13.4 MB/s 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@888 -- # size=4096 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.651 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:18.909 { 00:18:18.909 "nbd_device": "/dev/nbd0", 00:18:18.909 "bdev_name": "raid" 00:18:18.909 } 00:18:18.909 ]' 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:18.909 { 00:18:18.909 "nbd_device": "/dev/nbd0", 00:18:18.909 "bdev_name": "raid" 00:18:18.909 } 00:18:18.909 ]' 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:18:18.909 
17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:18:18.909 4096+0 records in 00:18:18.909 4096+0 records out 00:18:18.909 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0232688 s, 90.1 MB/s 00:18:18.909 17:05:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:18:20.279 4096+0 records in 00:18:20.279 4096+0 records out 00:18:20.279 2097152 bytes (2.1 MB, 2.0 MiB) copied, 1.37559 s, 1.5 MB/s 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:18:20.279 128+0 records in 00:18:20.279 128+0 records out 00:18:20.279 65536 bytes (66 kB, 64 KiB) copied, 0.00114186 s, 57.4 MB/s 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ 
)) 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:18:20.279 2035+0 records in 00:18:20.279 2035+0 records out 00:18:20.279 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0081741 s, 127 MB/s 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:18:20.279 456+0 records in 00:18:20.279 456+0 records out 00:18:20.279 233472 bytes (233 kB, 228 KiB) copied, 0.0016938 s, 138 MB/s 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:20.279 17:05:56 
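The unmap verification loop running above works by keeping a reference file in lockstep with the device: 4096 random 512 B blocks are written through /dev/nbd0, then for each (offset, length) pair the same span is zeroed in the reference file and discarded on the device, and `cmp` confirms the two still match (i.e. the raid translated the discard to zeroes). Byte offsets are simply block numbers times the 512 B block size. One iteration, using the second range from the log:

```shell
# Reference data: 4096 blocks (2 MiB) of random bytes, mirrored to the device
dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct

# Zero the span in the reference file, discard it on the device, then verify
off_blk=1028; num_blk=2035
dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=$off_blk count=$num_blk conv=notrunc
blkdiscard -o $((off_blk * 512)) -l $((num_blk * 512)) /dev/nbd0
blockdev --flushbufs /dev/nbd0
cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
```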
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.279 17:05:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:20.536 [2024-11-08 17:05:57.189258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:18:20.536 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:18:20.794 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:20.794 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 59358 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 59358 ']' 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@956 -- # kill -0 59358 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59358 00:18:20.795 killing process with pid 59358 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59358' 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 59358 00:18:20.795 [2024-11-08 17:05:57.473527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.795 17:05:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 59358 00:18:20.795 [2024-11-08 17:05:57.473633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.795 [2024-11-08 17:05:57.473685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.795 [2024-11-08 17:05:57.473701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:18:21.052 [2024-11-08 17:05:57.617842] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.009 17:05:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:18:22.009 00:18:22.009 real 0m4.503s 00:18:22.009 user 0m4.884s 00:18:22.009 sys 0m1.171s 00:18:22.009 17:05:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:22.009 ************************************ 
00:18:22.009 END TEST raid_function_test_raid0
00:18:22.009 ************************************
00:18:22.009 17:05:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:18:22.009 17:05:58 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:18:22.009 17:05:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:18:22.009 17:05:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:18:22.009 17:05:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:22.009 ************************************
00:18:22.009 START TEST raid_function_test_concat
00:18:22.009 ************************************
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:18:22.009 Process raid pid: 59497
00:18:22.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=59497
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59497'
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 59497
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 59497 ']'
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:18:22.009 17:05:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:18:22.009 [2024-11-08 17:05:58.527772] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization...
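The waitforlisten step traced above simply polls until the SPDK target's RPC socket appears. A hedged, self-contained sketch of that polling pattern is below; it is reconstructed from the trace, not copied from autotest_common.sh, and `$sock` is a hypothetical stand-in path that a background `touch` creates so the loop can run without a real SPDK target (a real check would test for a UNIX socket, not a plain file).

```shell
# Sketch of a "wait for the RPC endpoint" loop, modeled on the trace.
# Assumption: a plain temp file stands in for /var/tmp/spdk.sock.
sock=$(mktemp -u)
( sleep 0.2; touch "$sock" ) &   # simulates the target coming up

max_retries=100                  # same retry budget seen in the trace
n=0
until [ -e "$sock" ]; do
    n=$((n + 1))
    if [ "$n" -ge "$max_retries" ]; then
        echo "timeout waiting for $sock" >&2
        exit 1
    fi
    sleep 0.1
done
wait                             # reap the background helper
rm -f "$sock"
```

The loop bounds the wait (100 retries at 100 ms here) instead of blocking forever, which is why the trace records a `max_retries=100` local before polling.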
00:18:22.009 [2024-11-08 17:05:58.527919] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:22.009 [2024-11-08 17:05:58.693287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:22.266 [2024-11-08 17:05:58.813169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:22.266 [2024-11-08 17:05:58.964480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:22.266 [2024-11-08 17:05:58.964740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:18:22.832 Base_1
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:18:22.832 Base_2
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:18:22.832 [2024-11-08 17:05:59.465538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:18:22.832 [2024-11-08 17:05:59.467802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:18:22.832 [2024-11-08 17:05:59.467898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:18:22.832 [2024-11-08 17:05:59.467912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:18:22.832 [2024-11-08 17:05:59.468227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:18:22.832 [2024-11-08 17:05:59.468379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:18:22.832 [2024-11-08 17:05:59.468388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:18:22.832 [2024-11-08 17:05:59.468564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:22.832 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:18:23.091 [2024-11-08 17:05:59.693609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:18:23.091 /dev/nbd0
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:23.091 1+0 records in
00:18:23.091 1+0 records out
00:18:23.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728675 s, 5.6 MB/s
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
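The unmap/verify loop that the raid0 trace completed above, and that this concat run repeats next, can be condensed into a standalone sketch. This is a reconstruction from the xtrace lines, not the bdev_raid.sh source: regular temp files stand in for /raidtest/raidrandtest and /dev/nbd0, and zeroing the "device" file models the assumption that a range discarded with blkdiscard on the raid bdev reads back as zeroes.

```shell
# Reconstruction of the unmap/verify pattern from the trace.
# Hypothetical stand-ins: temp files replace the reference file and /dev/nbd0.
blksize=512
rw_blk_num=4096                      # 4096 blocks = 2097152 bytes, as in the trace
unmap_blk_offs=(0 1028 321)          # block offsets the test exercises
unmap_blk_nums=(128 2035 456)        # block counts the test exercises

ref=$(mktemp)                        # reference pattern (/raidtest/raidrandtest role)
dev=$(mktemp)                        # stand-in for the exported raid bdev

# Write a random pattern, copy it to the "device", and require byte equality.
dd if=/dev/urandom of="$ref" bs=$blksize count=$rw_blk_num 2>/dev/null
cp "$ref" "$dev"
cmp -b -n $((blksize * rw_blk_num)) "$ref" "$dev"

# For each range: zero it in the reference, "discard" it on the device
# (zeroing models an unmap that reads back as zeroes), then compare the
# full length again so the discard must not disturb neighboring data.
for (( i = 0; i < ${#unmap_blk_offs[@]}; i++ )); do
    off=${unmap_blk_offs[$i]}
    num=${unmap_blk_nums[$i]}
    dd if=/dev/zero of="$ref" bs=$blksize seek=$off count=$num conv=notrunc 2>/dev/null
    dd if=/dev/zero of="$dev" bs=$blksize seek=$off count=$num conv=notrunc 2>/dev/null
    cmp -b -n $((blksize * rw_blk_num)) "$ref" "$dev"
done
```

The three offset/length pairs are exactly the byte ranges seen in the trace (0/65536, 526336/1041920, 164352/233472 once multiplied by the 512-byte block size), so the loop checks an aligned range at the start, a large misaligned range, and a small misaligned range.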
00:18:23.091 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:18:23.364 17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:18:23.364 {
00:18:23.364 "nbd_device": "/dev/nbd0",
00:18:23.364 "bdev_name": "raid"
00:18:23.364 }
00:18:23.364 ]'
17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:18:23.364 {
00:18:23.364 "nbd_device": "/dev/nbd0",
00:18:23.364 "bdev_name": "raid"
00:18:23.364 }
00:18:23.364 ]'
17:05:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:18:23.364 4096+0 records in
00:18:23.364 4096+0 records out
00:18:23.364 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0244621 s, 85.7 MB/s
00:18:23.364 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:18:24.328 4096+0 records in
00:18:24.328 4096+0 records out
00:18:24.328 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.731066 s, 2.9 MB/s
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:18:24.328 128+0 records in
00:18:24.328 128+0 records out
00:18:24.328 65536 bytes (66 kB, 64 KiB) copied, 0.000464019 s, 141 MB/s
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:18:24.328 2035+0 records in
00:18:24.328 2035+0 records out
00:18:24.328 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00724372 s, 144 MB/s
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:18:24.328 456+0 records in
00:18:24.328 456+0 records out
00:18:24.328 233472 bytes (233 kB, 228 KiB) copied, 0.00246529 s, 94.7 MB/s
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:24.328 17:06:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:24.586 [2024-11-08 17:06:01.108575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:18:24.586 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 59497
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 59497 ']'
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 59497
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59497
00:18:24.888 killing process with pid 59497
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59497'
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 59497
00:18:24.888 [2024-11-08 17:06:01.410539] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:24.888 17:06:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 59497
00:18:24.888 [2024-11-08 17:06:01.410651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:24.888 [2024-11-08 17:06:01.410708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:24.888 [2024-11-08 17:06:01.410721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:18:24.888 [2024-11-08 17:06:01.571904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:25.821 ************************************
00:18:25.821 END TEST raid_function_test_concat
00:18:25.821 ************************************
00:18:25.821 17:06:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:18:25.821
00:18:25.821 real 0m3.917s
00:18:25.821 user 0m4.435s
00:18:25.821 sys 0m0.984s
00:18:25.821 17:06:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:18:25.821 17:06:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:18:25.821 17:06:02 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:18:25.821 17:06:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:18:25.821 17:06:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:18:25.821 17:06:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:25.821 ************************************
00:18:25.821 START TEST raid0_resize_test
00:18:25.821 ************************************
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:18:25.821 Process raid pid: 59620
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59620
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59620'
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59620
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 59620 ']'
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:18:25.821 17:06:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:18:25.821 [2024-11-08 17:06:02.512769] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization...
00:18:25.821 [2024-11-08 17:06:02.513213] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:26.077 [2024-11-08 17:06:02.691953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:26.372 [2024-11-08 17:06:02.851322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:26.372 [2024-11-08 17:06:03.005043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:26.372 [2024-11-08 17:06:03.005272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0
00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:18:26.945 Base_1
00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set
+x 00:18:26.945 Base_2 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.945 [2024-11-08 17:06:03.444699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:18:26.945 [2024-11-08 17:06:03.446746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:18:26.945 [2024-11-08 17:06:03.446841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:26.945 [2024-11-08 17:06:03.446854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:26.945 [2024-11-08 17:06:03.447165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:26.945 [2024-11-08 17:06:03.447294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:26.945 [2024-11-08 17:06:03.447303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:26.945 [2024-11-08 17:06:03.447470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:18:26.945 [2024-11-08 17:06:03.456684] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:26.945 [2024-11-08 17:06:03.456855] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:18:26.945 true 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.945 [2024-11-08 17:06:03.468875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.945 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.945 [2024-11-08 17:06:03.504649] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:26.945 [2024-11-08 17:06:03.504681] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:18:26.945 [2024-11-08 17:06:03.504719] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:18:26.945 true 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:18:26.946 [2024-11-08 17:06:03.516877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59620 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 59620 ']' 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 59620 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- 
# '[' Linux = Linux ']' 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59620 00:18:26.946 killing process with pid 59620 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59620' 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 59620 00:18:26.946 [2024-11-08 17:06:03.573636] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.946 17:06:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 59620 00:18:26.946 [2024-11-08 17:06:03.573748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.946 [2024-11-08 17:06:03.573837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.946 [2024-11-08 17:06:03.573849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:26.946 [2024-11-08 17:06:03.585876] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.887 ************************************ 00:18:27.887 END TEST raid0_resize_test 00:18:27.887 ************************************ 00:18:27.887 17:06:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:18:27.887 00:18:27.887 real 0m1.914s 00:18:27.887 user 0m2.065s 00:18:27.887 sys 0m0.313s 00:18:27.887 17:06:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:27.887 17:06:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.887 17:06:04 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:18:27.887 
17:06:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:27.887 17:06:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:27.887 17:06:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.887 ************************************ 00:18:27.887 START TEST raid1_resize_test 00:18:27.887 ************************************ 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59676 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59676' 00:18:27.887 Process raid pid: 59676 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59676 00:18:27.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 59676 ']' 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:27.887 17:06:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.887 [2024-11-08 17:06:04.495603] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:18:27.887 [2024-11-08 17:06:04.495964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.145 [2024-11-08 17:06:04.659646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.145 [2024-11-08 17:06:04.781093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.403 [2024-11-08 17:06:04.931439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.403 [2024-11-08 17:06:04.931671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.660 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:28.660 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:18:28.660 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- 
# rpc_cmd bdev_null_create Base_1 32 512 00:18:28.660 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.660 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.660 Base_1 00:18:28.660 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.660 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:18:28.660 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.660 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.918 Base_2 00:18:28.918 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.918 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:18:28.918 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:18:28.918 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.918 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.918 [2024-11-08 17:06:05.380785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:18:28.918 [2024-11-08 17:06:05.382852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:18:28.918 [2024-11-08 17:06:05.383051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:28.918 [2024-11-08 17:06:05.383072] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:28.918 [2024-11-08 17:06:05.383386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:28.918 [2024-11-08 17:06:05.383520] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:28.918 
[2024-11-08 17:06:05.383528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:28.918 [2024-11-08 17:06:05.383705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.918 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.918 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:18:28.918 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.918 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.918 [2024-11-08 17:06:05.388743] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:28.918 [2024-11-08 17:06:05.388786] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:18:28.918 true 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:18:28.919 [2024-11-08 17:06:05.400957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:18:28.919 
17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.919 [2024-11-08 17:06:05.432765] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:28.919 [2024-11-08 17:06:05.432920] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:18:28.919 [2024-11-08 17:06:05.432964] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:18:28.919 true 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.919 [2024-11-08 17:06:05.444973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:18:28.919 17:06:05 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59676 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 59676 ']' 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 59676 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59676 00:18:28.919 killing process with pid 59676 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59676' 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 59676 00:18:28.919 [2024-11-08 17:06:05.497653] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.919 17:06:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 59676 00:18:28.919 [2024-11-08 17:06:05.497763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.919 [2024-11-08 17:06:05.498261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.919 [2024-11-08 17:06:05.498283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:28.919 [2024-11-08 17:06:05.509705] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:18:29.867 ************************************ 00:18:29.867 END TEST raid1_resize_test 00:18:29.867 ************************************ 00:18:29.867 17:06:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:18:29.867 00:18:29.867 real 0m1.829s 00:18:29.867 user 0m1.957s 00:18:29.867 sys 0m0.290s 00:18:29.867 17:06:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:29.867 17:06:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.867 17:06:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:18:29.867 17:06:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:29.867 17:06:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:18:29.867 17:06:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:29.867 17:06:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:29.867 17:06:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.867 ************************************ 00:18:29.867 START TEST raid_state_function_test 00:18:29.867 ************************************ 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:29.867 Process raid pid: 59733 00:18:29.867 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=59733 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59733' 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 59733 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 59733 ']' 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:29.867 17:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.867 [2024-11-08 17:06:06.401422] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:18:29.867 [2024-11-08 17:06:06.401741] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.867 [2024-11-08 17:06:06.561937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.125 [2024-11-08 17:06:06.682857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.125 [2024-11-08 17:06:06.834013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.125 [2024-11-08 17:06:06.834239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.702 [2024-11-08 17:06:07.251241] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:30.702 [2024-11-08 17:06:07.251297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:30.702 [2024-11-08 17:06:07.251307] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.702 [2024-11-08 17:06:07.251317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.702 17:06:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.702 "name": "Existed_Raid", 00:18:30.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.702 "strip_size_kb": 64, 00:18:30.702 "state": "configuring", 00:18:30.702 
"raid_level": "raid0", 00:18:30.702 "superblock": false, 00:18:30.702 "num_base_bdevs": 2, 00:18:30.702 "num_base_bdevs_discovered": 0, 00:18:30.702 "num_base_bdevs_operational": 2, 00:18:30.702 "base_bdevs_list": [ 00:18:30.702 { 00:18:30.702 "name": "BaseBdev1", 00:18:30.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.702 "is_configured": false, 00:18:30.702 "data_offset": 0, 00:18:30.702 "data_size": 0 00:18:30.702 }, 00:18:30.702 { 00:18:30.702 "name": "BaseBdev2", 00:18:30.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.702 "is_configured": false, 00:18:30.702 "data_offset": 0, 00:18:30.702 "data_size": 0 00:18:30.702 } 00:18:30.702 ] 00:18:30.702 }' 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.702 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.960 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:30.960 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.961 [2024-11-08 17:06:07.579272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:30.961 [2024-11-08 17:06:07.579309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:30.961 [2024-11-08 17:06:07.587271] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:30.961 [2024-11-08 17:06:07.587311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:30.961 [2024-11-08 17:06:07.587321] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.961 [2024-11-08 17:06:07.587332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.961 [2024-11-08 17:06:07.622418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.961 BaseBdev1 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.961 [ 00:18:30.961 { 00:18:30.961 "name": "BaseBdev1", 00:18:30.961 "aliases": [ 00:18:30.961 "c7c38149-a740-405b-92ae-7be516e79e8c" 00:18:30.961 ], 00:18:30.961 "product_name": "Malloc disk", 00:18:30.961 "block_size": 512, 00:18:30.961 "num_blocks": 65536, 00:18:30.961 "uuid": "c7c38149-a740-405b-92ae-7be516e79e8c", 00:18:30.961 "assigned_rate_limits": { 00:18:30.961 "rw_ios_per_sec": 0, 00:18:30.961 "rw_mbytes_per_sec": 0, 00:18:30.961 "r_mbytes_per_sec": 0, 00:18:30.961 "w_mbytes_per_sec": 0 00:18:30.961 }, 00:18:30.961 "claimed": true, 00:18:30.961 "claim_type": "exclusive_write", 00:18:30.961 "zoned": false, 00:18:30.961 "supported_io_types": { 00:18:30.961 "read": true, 00:18:30.961 "write": true, 00:18:30.961 "unmap": true, 00:18:30.961 "flush": true, 00:18:30.961 "reset": true, 00:18:30.961 "nvme_admin": false, 00:18:30.961 "nvme_io": false, 00:18:30.961 "nvme_io_md": false, 00:18:30.961 "write_zeroes": true, 00:18:30.961 "zcopy": true, 00:18:30.961 "get_zone_info": false, 00:18:30.961 "zone_management": false, 00:18:30.961 "zone_append": false, 00:18:30.961 "compare": false, 00:18:30.961 "compare_and_write": false, 00:18:30.961 "abort": true, 00:18:30.961 "seek_hole": false, 00:18:30.961 "seek_data": false, 00:18:30.961 "copy": true, 00:18:30.961 "nvme_iov_md": 
false 00:18:30.961 }, 00:18:30.961 "memory_domains": [ 00:18:30.961 { 00:18:30.961 "dma_device_id": "system", 00:18:30.961 "dma_device_type": 1 00:18:30.961 }, 00:18:30.961 { 00:18:30.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.961 "dma_device_type": 2 00:18:30.961 } 00:18:30.961 ], 00:18:30.961 "driver_specific": {} 00:18:30.961 } 00:18:30.961 ] 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.961 
17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.961 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.218 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.218 "name": "Existed_Raid", 00:18:31.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.218 "strip_size_kb": 64, 00:18:31.218 "state": "configuring", 00:18:31.218 "raid_level": "raid0", 00:18:31.218 "superblock": false, 00:18:31.218 "num_base_bdevs": 2, 00:18:31.218 "num_base_bdevs_discovered": 1, 00:18:31.218 "num_base_bdevs_operational": 2, 00:18:31.218 "base_bdevs_list": [ 00:18:31.218 { 00:18:31.218 "name": "BaseBdev1", 00:18:31.218 "uuid": "c7c38149-a740-405b-92ae-7be516e79e8c", 00:18:31.218 "is_configured": true, 00:18:31.218 "data_offset": 0, 00:18:31.218 "data_size": 65536 00:18:31.218 }, 00:18:31.218 { 00:18:31.218 "name": "BaseBdev2", 00:18:31.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.218 "is_configured": false, 00:18:31.218 "data_offset": 0, 00:18:31.218 "data_size": 0 00:18:31.218 } 00:18:31.218 ] 00:18:31.218 }' 00:18:31.218 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.218 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.475 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.476 [2024-11-08 17:06:07.966564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:31.476 [2024-11-08 17:06:07.966750] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.476 [2024-11-08 17:06:07.978622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.476 [2024-11-08 17:06:07.980729] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.476 [2024-11-08 17:06:07.980874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.476 17:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.476 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.476 "name": "Existed_Raid", 00:18:31.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.476 "strip_size_kb": 64, 00:18:31.476 "state": "configuring", 00:18:31.476 "raid_level": "raid0", 00:18:31.476 "superblock": false, 00:18:31.476 "num_base_bdevs": 2, 00:18:31.476 "num_base_bdevs_discovered": 1, 00:18:31.476 "num_base_bdevs_operational": 2, 00:18:31.476 "base_bdevs_list": [ 00:18:31.476 { 00:18:31.476 "name": "BaseBdev1", 00:18:31.476 "uuid": "c7c38149-a740-405b-92ae-7be516e79e8c", 00:18:31.476 "is_configured": true, 00:18:31.476 "data_offset": 0, 00:18:31.476 "data_size": 65536 00:18:31.476 }, 00:18:31.476 { 00:18:31.476 "name": "BaseBdev2", 00:18:31.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.476 "is_configured": false, 00:18:31.476 "data_offset": 0, 00:18:31.476 "data_size": 0 00:18:31.476 } 00:18:31.476 
] 00:18:31.476 }' 00:18:31.476 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.476 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.733 [2024-11-08 17:06:08.327964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.733 [2024-11-08 17:06:08.328022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:31.733 [2024-11-08 17:06:08.328031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:31.733 [2024-11-08 17:06:08.328308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:31.733 [2024-11-08 17:06:08.328454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:31.733 [2024-11-08 17:06:08.328468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:31.733 [2024-11-08 17:06:08.328730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.733 BaseBdev2 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:31.733 17:06:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.733 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.733 [ 00:18:31.733 { 00:18:31.733 "name": "BaseBdev2", 00:18:31.734 "aliases": [ 00:18:31.734 "1c0fd09f-7eb4-4526-9b07-50955c706fa8" 00:18:31.734 ], 00:18:31.734 "product_name": "Malloc disk", 00:18:31.734 "block_size": 512, 00:18:31.734 "num_blocks": 65536, 00:18:31.734 "uuid": "1c0fd09f-7eb4-4526-9b07-50955c706fa8", 00:18:31.734 "assigned_rate_limits": { 00:18:31.734 "rw_ios_per_sec": 0, 00:18:31.734 "rw_mbytes_per_sec": 0, 00:18:31.734 "r_mbytes_per_sec": 0, 00:18:31.734 "w_mbytes_per_sec": 0 00:18:31.734 }, 00:18:31.734 "claimed": true, 00:18:31.734 "claim_type": "exclusive_write", 00:18:31.734 "zoned": false, 00:18:31.734 "supported_io_types": { 00:18:31.734 "read": true, 00:18:31.734 "write": true, 00:18:31.734 "unmap": true, 00:18:31.734 "flush": true, 00:18:31.734 "reset": true, 00:18:31.734 "nvme_admin": false, 00:18:31.734 "nvme_io": false, 00:18:31.734 "nvme_io_md": 
false, 00:18:31.734 "write_zeroes": true, 00:18:31.734 "zcopy": true, 00:18:31.734 "get_zone_info": false, 00:18:31.734 "zone_management": false, 00:18:31.734 "zone_append": false, 00:18:31.734 "compare": false, 00:18:31.734 "compare_and_write": false, 00:18:31.734 "abort": true, 00:18:31.734 "seek_hole": false, 00:18:31.734 "seek_data": false, 00:18:31.734 "copy": true, 00:18:31.734 "nvme_iov_md": false 00:18:31.734 }, 00:18:31.734 "memory_domains": [ 00:18:31.734 { 00:18:31.734 "dma_device_id": "system", 00:18:31.734 "dma_device_type": 1 00:18:31.734 }, 00:18:31.734 { 00:18:31.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.734 "dma_device_type": 2 00:18:31.734 } 00:18:31.734 ], 00:18:31.734 "driver_specific": {} 00:18:31.734 } 00:18:31.734 ] 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.734 "name": "Existed_Raid", 00:18:31.734 "uuid": "8c90fc2b-3110-4450-bcde-8d30289868c1", 00:18:31.734 "strip_size_kb": 64, 00:18:31.734 "state": "online", 00:18:31.734 "raid_level": "raid0", 00:18:31.734 "superblock": false, 00:18:31.734 "num_base_bdevs": 2, 00:18:31.734 "num_base_bdevs_discovered": 2, 00:18:31.734 "num_base_bdevs_operational": 2, 00:18:31.734 "base_bdevs_list": [ 00:18:31.734 { 00:18:31.734 "name": "BaseBdev1", 00:18:31.734 "uuid": "c7c38149-a740-405b-92ae-7be516e79e8c", 00:18:31.734 "is_configured": true, 00:18:31.734 "data_offset": 0, 00:18:31.734 "data_size": 65536 00:18:31.734 }, 00:18:31.734 { 00:18:31.734 "name": "BaseBdev2", 00:18:31.734 "uuid": "1c0fd09f-7eb4-4526-9b07-50955c706fa8", 00:18:31.734 "is_configured": true, 00:18:31.734 "data_offset": 0, 00:18:31.734 "data_size": 65536 00:18:31.734 } 00:18:31.734 ] 00:18:31.734 }' 00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:18:31.734 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:32.056 [2024-11-08 17:06:08.700419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:32.056 "name": "Existed_Raid", 00:18:32.056 "aliases": [ 00:18:32.056 "8c90fc2b-3110-4450-bcde-8d30289868c1" 00:18:32.056 ], 00:18:32.056 "product_name": "Raid Volume", 00:18:32.056 "block_size": 512, 00:18:32.056 "num_blocks": 131072, 00:18:32.056 "uuid": "8c90fc2b-3110-4450-bcde-8d30289868c1", 00:18:32.056 "assigned_rate_limits": { 00:18:32.056 "rw_ios_per_sec": 0, 00:18:32.056 "rw_mbytes_per_sec": 0, 00:18:32.056 "r_mbytes_per_sec": 
0, 00:18:32.056 "w_mbytes_per_sec": 0 00:18:32.056 }, 00:18:32.056 "claimed": false, 00:18:32.056 "zoned": false, 00:18:32.056 "supported_io_types": { 00:18:32.056 "read": true, 00:18:32.056 "write": true, 00:18:32.056 "unmap": true, 00:18:32.056 "flush": true, 00:18:32.056 "reset": true, 00:18:32.056 "nvme_admin": false, 00:18:32.056 "nvme_io": false, 00:18:32.056 "nvme_io_md": false, 00:18:32.056 "write_zeroes": true, 00:18:32.056 "zcopy": false, 00:18:32.056 "get_zone_info": false, 00:18:32.056 "zone_management": false, 00:18:32.056 "zone_append": false, 00:18:32.056 "compare": false, 00:18:32.056 "compare_and_write": false, 00:18:32.056 "abort": false, 00:18:32.056 "seek_hole": false, 00:18:32.056 "seek_data": false, 00:18:32.056 "copy": false, 00:18:32.056 "nvme_iov_md": false 00:18:32.056 }, 00:18:32.056 "memory_domains": [ 00:18:32.056 { 00:18:32.056 "dma_device_id": "system", 00:18:32.056 "dma_device_type": 1 00:18:32.056 }, 00:18:32.056 { 00:18:32.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.056 "dma_device_type": 2 00:18:32.056 }, 00:18:32.056 { 00:18:32.056 "dma_device_id": "system", 00:18:32.056 "dma_device_type": 1 00:18:32.056 }, 00:18:32.056 { 00:18:32.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.056 "dma_device_type": 2 00:18:32.056 } 00:18:32.056 ], 00:18:32.056 "driver_specific": { 00:18:32.056 "raid": { 00:18:32.056 "uuid": "8c90fc2b-3110-4450-bcde-8d30289868c1", 00:18:32.056 "strip_size_kb": 64, 00:18:32.056 "state": "online", 00:18:32.056 "raid_level": "raid0", 00:18:32.056 "superblock": false, 00:18:32.056 "num_base_bdevs": 2, 00:18:32.056 "num_base_bdevs_discovered": 2, 00:18:32.056 "num_base_bdevs_operational": 2, 00:18:32.056 "base_bdevs_list": [ 00:18:32.056 { 00:18:32.056 "name": "BaseBdev1", 00:18:32.056 "uuid": "c7c38149-a740-405b-92ae-7be516e79e8c", 00:18:32.056 "is_configured": true, 00:18:32.056 "data_offset": 0, 00:18:32.056 "data_size": 65536 00:18:32.056 }, 00:18:32.056 { 00:18:32.056 "name": "BaseBdev2", 
00:18:32.056 "uuid": "1c0fd09f-7eb4-4526-9b07-50955c706fa8", 00:18:32.056 "is_configured": true, 00:18:32.056 "data_offset": 0, 00:18:32.056 "data_size": 65536 00:18:32.056 } 00:18:32.056 ] 00:18:32.056 } 00:18:32.056 } 00:18:32.056 }' 00:18:32.056 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:32.319 BaseBdev2' 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.319 [2024-11-08 17:06:08.872206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.319 [2024-11-08 17:06:08.872241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.319 [2024-11-08 17:06:08.872301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.319 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.320 "name": "Existed_Raid", 00:18:32.320 "uuid": "8c90fc2b-3110-4450-bcde-8d30289868c1", 00:18:32.320 "strip_size_kb": 64, 00:18:32.320 
"state": "offline", 00:18:32.320 "raid_level": "raid0", 00:18:32.320 "superblock": false, 00:18:32.320 "num_base_bdevs": 2, 00:18:32.320 "num_base_bdevs_discovered": 1, 00:18:32.320 "num_base_bdevs_operational": 1, 00:18:32.320 "base_bdevs_list": [ 00:18:32.320 { 00:18:32.320 "name": null, 00:18:32.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.320 "is_configured": false, 00:18:32.320 "data_offset": 0, 00:18:32.320 "data_size": 65536 00:18:32.320 }, 00:18:32.320 { 00:18:32.320 "name": "BaseBdev2", 00:18:32.320 "uuid": "1c0fd09f-7eb4-4526-9b07-50955c706fa8", 00:18:32.320 "is_configured": true, 00:18:32.320 "data_offset": 0, 00:18:32.320 "data_size": 65536 00:18:32.320 } 00:18:32.320 ] 00:18:32.320 }' 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.320 17:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.577 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:32.577 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:32.577 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:32.577 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.577 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.577 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.577 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.578 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:32.578 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:32.578 17:06:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:32.578 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.578 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.578 [2024-11-08 17:06:09.279990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:32.578 [2024-11-08 17:06:09.280052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 59733 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 59733 ']' 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 59733 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59733 00:18:32.834 killing process with pid 59733 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59733' 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 59733 00:18:32.834 17:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 59733 00:18:32.834 [2024-11-08 17:06:09.404592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.834 [2024-11-08 17:06:09.415939] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.767 ************************************ 00:18:33.767 END TEST raid_state_function_test 00:18:33.767 ************************************ 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:33.767 00:18:33.767 real 0m3.845s 00:18:33.767 user 0m5.463s 00:18:33.767 sys 0m0.635s 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.767 17:06:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:18:33.767 17:06:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:18:33.767 17:06:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:33.767 17:06:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.767 ************************************ 00:18:33.767 START TEST raid_state_function_test_sb 00:18:33.767 ************************************ 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:33.767 Process raid pid: 59970 00:18:33.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=59970 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59970' 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 59970 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 59970 ']' 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.767 17:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:33.767 [2024-11-08 17:06:10.333368] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:18:33.767 [2024-11-08 17:06:10.333565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.024 [2024-11-08 17:06:10.500123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.024 [2024-11-08 17:06:10.620955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.282 [2024-11-08 17:06:10.773546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.282 [2024-11-08 17:06:10.773599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.539 [2024-11-08 17:06:11.190914] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:18:34.539 [2024-11-08 17:06:11.191098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:34.539 [2024-11-08 17:06:11.191165] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.539 [2024-11-08 17:06:11.191193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.539 "name": "Existed_Raid", 00:18:34.539 "uuid": "892bacf0-0ff9-43ff-a4c9-66e356558d61", 00:18:34.539 "strip_size_kb": 64, 00:18:34.539 "state": "configuring", 00:18:34.539 "raid_level": "raid0", 00:18:34.539 "superblock": true, 00:18:34.539 "num_base_bdevs": 2, 00:18:34.539 "num_base_bdevs_discovered": 0, 00:18:34.539 "num_base_bdevs_operational": 2, 00:18:34.539 "base_bdevs_list": [ 00:18:34.539 { 00:18:34.539 "name": "BaseBdev1", 00:18:34.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.539 "is_configured": false, 00:18:34.539 "data_offset": 0, 00:18:34.539 "data_size": 0 00:18:34.539 }, 00:18:34.539 { 00:18:34.539 "name": "BaseBdev2", 00:18:34.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.539 "is_configured": false, 00:18:34.539 "data_offset": 0, 00:18:34.539 "data_size": 0 00:18:34.539 } 00:18:34.539 ] 00:18:34.539 }' 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.539 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.115 [2024-11-08 17:06:11.538977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:18:35.115 [2024-11-08 17:06:11.539203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.115 [2024-11-08 17:06:11.546968] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:35.115 [2024-11-08 17:06:11.547020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:35.115 [2024-11-08 17:06:11.547030] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:35.115 [2024-11-08 17:06:11.547042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.115 [2024-11-08 17:06:11.582833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.115 BaseBdev1 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.115 [ 00:18:35.115 { 00:18:35.115 "name": "BaseBdev1", 00:18:35.115 "aliases": [ 00:18:35.115 "d268f7b1-f25b-4051-85b2-f787b4ea6885" 00:18:35.115 ], 00:18:35.115 "product_name": "Malloc disk", 00:18:35.115 "block_size": 512, 00:18:35.115 "num_blocks": 65536, 00:18:35.115 "uuid": "d268f7b1-f25b-4051-85b2-f787b4ea6885", 00:18:35.115 "assigned_rate_limits": { 00:18:35.115 "rw_ios_per_sec": 0, 00:18:35.115 "rw_mbytes_per_sec": 0, 00:18:35.115 "r_mbytes_per_sec": 0, 00:18:35.115 "w_mbytes_per_sec": 0 00:18:35.115 }, 00:18:35.115 "claimed": true, 
00:18:35.115 "claim_type": "exclusive_write", 00:18:35.115 "zoned": false, 00:18:35.115 "supported_io_types": { 00:18:35.115 "read": true, 00:18:35.115 "write": true, 00:18:35.115 "unmap": true, 00:18:35.115 "flush": true, 00:18:35.115 "reset": true, 00:18:35.115 "nvme_admin": false, 00:18:35.115 "nvme_io": false, 00:18:35.115 "nvme_io_md": false, 00:18:35.115 "write_zeroes": true, 00:18:35.115 "zcopy": true, 00:18:35.115 "get_zone_info": false, 00:18:35.115 "zone_management": false, 00:18:35.115 "zone_append": false, 00:18:35.115 "compare": false, 00:18:35.115 "compare_and_write": false, 00:18:35.115 "abort": true, 00:18:35.115 "seek_hole": false, 00:18:35.115 "seek_data": false, 00:18:35.115 "copy": true, 00:18:35.115 "nvme_iov_md": false 00:18:35.115 }, 00:18:35.115 "memory_domains": [ 00:18:35.115 { 00:18:35.115 "dma_device_id": "system", 00:18:35.115 "dma_device_type": 1 00:18:35.115 }, 00:18:35.115 { 00:18:35.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.115 "dma_device_type": 2 00:18:35.115 } 00:18:35.115 ], 00:18:35.115 "driver_specific": {} 00:18:35.115 } 00:18:35.115 ] 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.115 17:06:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.115 "name": "Existed_Raid", 00:18:35.115 "uuid": "da8183bc-ccee-4b25-b06b-468bb635b761", 00:18:35.115 "strip_size_kb": 64, 00:18:35.115 "state": "configuring", 00:18:35.115 "raid_level": "raid0", 00:18:35.115 "superblock": true, 00:18:35.115 "num_base_bdevs": 2, 00:18:35.115 "num_base_bdevs_discovered": 1, 00:18:35.115 "num_base_bdevs_operational": 2, 00:18:35.115 "base_bdevs_list": [ 00:18:35.115 { 00:18:35.115 "name": "BaseBdev1", 00:18:35.115 "uuid": "d268f7b1-f25b-4051-85b2-f787b4ea6885", 00:18:35.115 "is_configured": true, 00:18:35.115 "data_offset": 2048, 00:18:35.115 "data_size": 63488 00:18:35.115 }, 00:18:35.115 { 00:18:35.115 "name": "BaseBdev2", 00:18:35.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.115 
"is_configured": false, 00:18:35.115 "data_offset": 0, 00:18:35.115 "data_size": 0 00:18:35.115 } 00:18:35.115 ] 00:18:35.115 }' 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.115 17:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.373 [2024-11-08 17:06:12.050996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:35.373 [2024-11-08 17:06:12.051245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.373 [2024-11-08 17:06:12.059117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.373 [2024-11-08 17:06:12.061301] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:35.373 [2024-11-08 17:06:12.061433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.373 17:06:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.373 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.631 17:06:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.631 "name": "Existed_Raid", 00:18:35.631 "uuid": "d146560f-0e4f-4405-9a04-c332dc1548b0", 00:18:35.631 "strip_size_kb": 64, 00:18:35.631 "state": "configuring", 00:18:35.631 "raid_level": "raid0", 00:18:35.631 "superblock": true, 00:18:35.631 "num_base_bdevs": 2, 00:18:35.631 "num_base_bdevs_discovered": 1, 00:18:35.631 "num_base_bdevs_operational": 2, 00:18:35.631 "base_bdevs_list": [ 00:18:35.631 { 00:18:35.631 "name": "BaseBdev1", 00:18:35.631 "uuid": "d268f7b1-f25b-4051-85b2-f787b4ea6885", 00:18:35.631 "is_configured": true, 00:18:35.631 "data_offset": 2048, 00:18:35.631 "data_size": 63488 00:18:35.631 }, 00:18:35.631 { 00:18:35.631 "name": "BaseBdev2", 00:18:35.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.631 "is_configured": false, 00:18:35.631 "data_offset": 0, 00:18:35.631 "data_size": 0 00:18:35.631 } 00:18:35.631 ] 00:18:35.631 }' 00:18:35.631 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.631 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.889 [2024-11-08 17:06:12.452734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.889 [2024-11-08 17:06:12.453073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:35.889 [2024-11-08 17:06:12.453089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:35.889 [2024-11-08 17:06:12.453374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:18:35.889 [2024-11-08 17:06:12.453518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:35.889 [2024-11-08 17:06:12.453529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:35.889 BaseBdev2 00:18:35.889 [2024-11-08 17:06:12.453663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.889 17:06:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.889 [ 00:18:35.889 { 00:18:35.889 "name": "BaseBdev2", 00:18:35.889 "aliases": [ 00:18:35.889 "2763008d-9958-4a3f-ae85-356e56d433c9" 00:18:35.889 ], 00:18:35.889 "product_name": "Malloc disk", 00:18:35.889 "block_size": 512, 00:18:35.889 "num_blocks": 65536, 00:18:35.889 "uuid": "2763008d-9958-4a3f-ae85-356e56d433c9", 00:18:35.889 "assigned_rate_limits": { 00:18:35.889 "rw_ios_per_sec": 0, 00:18:35.889 "rw_mbytes_per_sec": 0, 00:18:35.889 "r_mbytes_per_sec": 0, 00:18:35.889 "w_mbytes_per_sec": 0 00:18:35.889 }, 00:18:35.889 "claimed": true, 00:18:35.889 "claim_type": "exclusive_write", 00:18:35.889 "zoned": false, 00:18:35.889 "supported_io_types": { 00:18:35.889 "read": true, 00:18:35.889 "write": true, 00:18:35.889 "unmap": true, 00:18:35.889 "flush": true, 00:18:35.889 "reset": true, 00:18:35.889 "nvme_admin": false, 00:18:35.889 "nvme_io": false, 00:18:35.889 "nvme_io_md": false, 00:18:35.889 "write_zeroes": true, 00:18:35.889 "zcopy": true, 00:18:35.889 "get_zone_info": false, 00:18:35.889 "zone_management": false, 00:18:35.889 "zone_append": false, 00:18:35.889 "compare": false, 00:18:35.889 "compare_and_write": false, 00:18:35.889 "abort": true, 00:18:35.889 "seek_hole": false, 00:18:35.889 "seek_data": false, 00:18:35.889 "copy": true, 00:18:35.889 "nvme_iov_md": false 00:18:35.889 }, 00:18:35.889 "memory_domains": [ 00:18:35.889 { 00:18:35.889 "dma_device_id": "system", 00:18:35.889 "dma_device_type": 1 00:18:35.889 }, 00:18:35.889 { 00:18:35.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.889 "dma_device_type": 2 00:18:35.889 } 00:18:35.889 ], 00:18:35.889 "driver_specific": {} 00:18:35.889 } 00:18:35.889 ] 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:35.889 17:06:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.889 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.889 17:06:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.889 "name": "Existed_Raid", 00:18:35.889 "uuid": "d146560f-0e4f-4405-9a04-c332dc1548b0", 00:18:35.889 "strip_size_kb": 64, 00:18:35.889 "state": "online", 00:18:35.890 "raid_level": "raid0", 00:18:35.890 "superblock": true, 00:18:35.890 "num_base_bdevs": 2, 00:18:35.890 "num_base_bdevs_discovered": 2, 00:18:35.890 "num_base_bdevs_operational": 2, 00:18:35.890 "base_bdevs_list": [ 00:18:35.890 { 00:18:35.890 "name": "BaseBdev1", 00:18:35.890 "uuid": "d268f7b1-f25b-4051-85b2-f787b4ea6885", 00:18:35.890 "is_configured": true, 00:18:35.890 "data_offset": 2048, 00:18:35.890 "data_size": 63488 00:18:35.890 }, 00:18:35.890 { 00:18:35.890 "name": "BaseBdev2", 00:18:35.890 "uuid": "2763008d-9958-4a3f-ae85-356e56d433c9", 00:18:35.890 "is_configured": true, 00:18:35.890 "data_offset": 2048, 00:18:35.890 "data_size": 63488 00:18:35.890 } 00:18:35.890 ] 00:18:35.890 }' 00:18:35.890 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.890 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.192 [2024-11-08 17:06:12.809205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:36.192 "name": "Existed_Raid", 00:18:36.192 "aliases": [ 00:18:36.192 "d146560f-0e4f-4405-9a04-c332dc1548b0" 00:18:36.192 ], 00:18:36.192 "product_name": "Raid Volume", 00:18:36.192 "block_size": 512, 00:18:36.192 "num_blocks": 126976, 00:18:36.192 "uuid": "d146560f-0e4f-4405-9a04-c332dc1548b0", 00:18:36.192 "assigned_rate_limits": { 00:18:36.192 "rw_ios_per_sec": 0, 00:18:36.192 "rw_mbytes_per_sec": 0, 00:18:36.192 "r_mbytes_per_sec": 0, 00:18:36.192 "w_mbytes_per_sec": 0 00:18:36.192 }, 00:18:36.192 "claimed": false, 00:18:36.192 "zoned": false, 00:18:36.192 "supported_io_types": { 00:18:36.192 "read": true, 00:18:36.192 "write": true, 00:18:36.192 "unmap": true, 00:18:36.192 "flush": true, 00:18:36.192 "reset": true, 00:18:36.192 "nvme_admin": false, 00:18:36.192 "nvme_io": false, 00:18:36.192 "nvme_io_md": false, 00:18:36.192 "write_zeroes": true, 00:18:36.192 "zcopy": false, 00:18:36.192 "get_zone_info": false, 00:18:36.192 "zone_management": false, 00:18:36.192 "zone_append": false, 00:18:36.192 "compare": false, 00:18:36.192 "compare_and_write": false, 00:18:36.192 "abort": false, 00:18:36.192 "seek_hole": false, 00:18:36.192 "seek_data": false, 00:18:36.192 "copy": false, 00:18:36.192 "nvme_iov_md": false 00:18:36.192 }, 00:18:36.192 "memory_domains": [ 00:18:36.192 { 00:18:36.192 
"dma_device_id": "system", 00:18:36.192 "dma_device_type": 1 00:18:36.192 }, 00:18:36.192 { 00:18:36.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.192 "dma_device_type": 2 00:18:36.192 }, 00:18:36.192 { 00:18:36.192 "dma_device_id": "system", 00:18:36.192 "dma_device_type": 1 00:18:36.192 }, 00:18:36.192 { 00:18:36.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.192 "dma_device_type": 2 00:18:36.192 } 00:18:36.192 ], 00:18:36.192 "driver_specific": { 00:18:36.192 "raid": { 00:18:36.192 "uuid": "d146560f-0e4f-4405-9a04-c332dc1548b0", 00:18:36.192 "strip_size_kb": 64, 00:18:36.192 "state": "online", 00:18:36.192 "raid_level": "raid0", 00:18:36.192 "superblock": true, 00:18:36.192 "num_base_bdevs": 2, 00:18:36.192 "num_base_bdevs_discovered": 2, 00:18:36.192 "num_base_bdevs_operational": 2, 00:18:36.192 "base_bdevs_list": [ 00:18:36.192 { 00:18:36.192 "name": "BaseBdev1", 00:18:36.192 "uuid": "d268f7b1-f25b-4051-85b2-f787b4ea6885", 00:18:36.192 "is_configured": true, 00:18:36.192 "data_offset": 2048, 00:18:36.192 "data_size": 63488 00:18:36.192 }, 00:18:36.192 { 00:18:36.192 "name": "BaseBdev2", 00:18:36.192 "uuid": "2763008d-9958-4a3f-ae85-356e56d433c9", 00:18:36.192 "is_configured": true, 00:18:36.192 "data_offset": 2048, 00:18:36.192 "data_size": 63488 00:18:36.192 } 00:18:36.192 ] 00:18:36.192 } 00:18:36.192 } 00:18:36.192 }' 00:18:36.192 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:36.450 BaseBdev2' 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:36.450 17:06:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.450 17:06:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.450 [2024-11-08 17:06:12.980995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:36.450 [2024-11-08 17:06:12.981037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.450 [2024-11-08 17:06:12.981095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.450 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.450 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.451 "name": "Existed_Raid", 00:18:36.451 "uuid": "d146560f-0e4f-4405-9a04-c332dc1548b0", 00:18:36.451 "strip_size_kb": 64, 00:18:36.451 "state": "offline", 00:18:36.451 "raid_level": "raid0", 00:18:36.451 "superblock": true, 00:18:36.451 "num_base_bdevs": 2, 00:18:36.451 "num_base_bdevs_discovered": 1, 00:18:36.451 "num_base_bdevs_operational": 1, 00:18:36.451 "base_bdevs_list": [ 00:18:36.451 { 00:18:36.451 "name": null, 00:18:36.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.451 "is_configured": false, 00:18:36.451 "data_offset": 0, 00:18:36.451 "data_size": 63488 00:18:36.451 }, 00:18:36.451 { 00:18:36.451 "name": "BaseBdev2", 00:18:36.451 "uuid": "2763008d-9958-4a3f-ae85-356e56d433c9", 00:18:36.451 "is_configured": true, 00:18:36.451 "data_offset": 2048, 00:18:36.451 "data_size": 63488 00:18:36.451 } 00:18:36.451 ] 
00:18:36.451 }' 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.451 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.709 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.709 [2024-11-08 17:06:13.408313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:36.709 [2024-11-08 17:06:13.408382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.968 17:06:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 59970 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 59970 ']' 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 59970 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59970 00:18:36.968 killing process with pid 59970 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59970' 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 59970 00:18:36.968 [2024-11-08 17:06:13.535667] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:36.968 17:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 59970 00:18:36.968 [2024-11-08 17:06:13.546776] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.901 ************************************ 00:18:37.901 END TEST raid_state_function_test_sb 00:18:37.901 ************************************ 00:18:37.901 17:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:37.901 00:18:37.901 real 0m4.051s 00:18:37.901 user 0m5.812s 00:18:37.901 sys 0m0.680s 00:18:37.901 17:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:37.901 17:06:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.901 17:06:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:18:37.901 17:06:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:37.901 17:06:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:37.901 17:06:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.901 ************************************ 00:18:37.901 START TEST raid_superblock_test 00:18:37.901 ************************************ 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60211 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60211 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60211 ']' 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.901 17:06:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:37.901 17:06:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.901 [2024-11-08 17:06:14.449047] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:18:37.901 [2024-11-08 17:06:14.449473] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60211 ] 00:18:38.158 [2024-11-08 17:06:14.613851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.158 [2024-11-08 17:06:14.735005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.415 [2024-11-08 17:06:14.895268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.415 [2024-11-08 17:06:14.895542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.673 malloc1 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.673 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.673 [2024-11-08 17:06:15.362430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:38.673 [2024-11-08 17:06:15.362682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.673 [2024-11-08 17:06:15.362727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:38.673 [2024-11-08 17:06:15.362809] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:18:38.673 [2024-11-08 17:06:15.365177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.673 [2024-11-08 17:06:15.365307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:38.673 pt1 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.674 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.931 malloc2 00:18:38.931 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.931 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:38.931 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:38.931 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.931 [2024-11-08 17:06:15.405005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.931 [2024-11-08 17:06:15.405072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.931 [2024-11-08 17:06:15.405096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:38.931 [2024-11-08 17:06:15.405105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.931 [2024-11-08 17:06:15.407423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.931 [2024-11-08 17:06:15.407459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.931 pt2 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.932 [2024-11-08 17:06:15.413053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:38.932 [2024-11-08 17:06:15.415204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.932 [2024-11-08 17:06:15.415367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:38.932 [2024-11-08 17:06:15.415379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, 
blocklen 512 00:18:38.932 [2024-11-08 17:06:15.415660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:38.932 [2024-11-08 17:06:15.415829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:38.932 [2024-11-08 17:06:15.415843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:38.932 [2024-11-08 17:06:15.415989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.932 "name": "raid_bdev1", 00:18:38.932 "uuid": "f046baf2-ddcf-4263-b61b-b304fd9b9b5a", 00:18:38.932 "strip_size_kb": 64, 00:18:38.932 "state": "online", 00:18:38.932 "raid_level": "raid0", 00:18:38.932 "superblock": true, 00:18:38.932 "num_base_bdevs": 2, 00:18:38.932 "num_base_bdevs_discovered": 2, 00:18:38.932 "num_base_bdevs_operational": 2, 00:18:38.932 "base_bdevs_list": [ 00:18:38.932 { 00:18:38.932 "name": "pt1", 00:18:38.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.932 "is_configured": true, 00:18:38.932 "data_offset": 2048, 00:18:38.932 "data_size": 63488 00:18:38.932 }, 00:18:38.932 { 00:18:38.932 "name": "pt2", 00:18:38.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.932 "is_configured": true, 00:18:38.932 "data_offset": 2048, 00:18:38.932 "data_size": 63488 00:18:38.932 } 00:18:38.932 ] 00:18:38.932 }' 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.932 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:39.190 [2024-11-08 17:06:15.733406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:39.190 "name": "raid_bdev1", 00:18:39.190 "aliases": [ 00:18:39.190 "f046baf2-ddcf-4263-b61b-b304fd9b9b5a" 00:18:39.190 ], 00:18:39.190 "product_name": "Raid Volume", 00:18:39.190 "block_size": 512, 00:18:39.190 "num_blocks": 126976, 00:18:39.190 "uuid": "f046baf2-ddcf-4263-b61b-b304fd9b9b5a", 00:18:39.190 "assigned_rate_limits": { 00:18:39.190 "rw_ios_per_sec": 0, 00:18:39.190 "rw_mbytes_per_sec": 0, 00:18:39.190 "r_mbytes_per_sec": 0, 00:18:39.190 "w_mbytes_per_sec": 0 00:18:39.190 }, 00:18:39.190 "claimed": false, 00:18:39.190 "zoned": false, 00:18:39.190 "supported_io_types": { 00:18:39.190 "read": true, 00:18:39.190 "write": true, 00:18:39.190 "unmap": true, 00:18:39.190 "flush": true, 00:18:39.190 "reset": true, 00:18:39.190 "nvme_admin": false, 00:18:39.190 "nvme_io": false, 00:18:39.190 "nvme_io_md": false, 00:18:39.190 "write_zeroes": true, 00:18:39.190 "zcopy": false, 00:18:39.190 "get_zone_info": false, 00:18:39.190 "zone_management": false, 00:18:39.190 "zone_append": false, 00:18:39.190 "compare": false, 00:18:39.190 "compare_and_write": false, 00:18:39.190 "abort": false, 00:18:39.190 
"seek_hole": false, 00:18:39.190 "seek_data": false, 00:18:39.190 "copy": false, 00:18:39.190 "nvme_iov_md": false 00:18:39.190 }, 00:18:39.190 "memory_domains": [ 00:18:39.190 { 00:18:39.190 "dma_device_id": "system", 00:18:39.190 "dma_device_type": 1 00:18:39.190 }, 00:18:39.190 { 00:18:39.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.190 "dma_device_type": 2 00:18:39.190 }, 00:18:39.190 { 00:18:39.190 "dma_device_id": "system", 00:18:39.190 "dma_device_type": 1 00:18:39.190 }, 00:18:39.190 { 00:18:39.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.190 "dma_device_type": 2 00:18:39.190 } 00:18:39.190 ], 00:18:39.190 "driver_specific": { 00:18:39.190 "raid": { 00:18:39.190 "uuid": "f046baf2-ddcf-4263-b61b-b304fd9b9b5a", 00:18:39.190 "strip_size_kb": 64, 00:18:39.190 "state": "online", 00:18:39.190 "raid_level": "raid0", 00:18:39.190 "superblock": true, 00:18:39.190 "num_base_bdevs": 2, 00:18:39.190 "num_base_bdevs_discovered": 2, 00:18:39.190 "num_base_bdevs_operational": 2, 00:18:39.190 "base_bdevs_list": [ 00:18:39.190 { 00:18:39.190 "name": "pt1", 00:18:39.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.190 "is_configured": true, 00:18:39.190 "data_offset": 2048, 00:18:39.190 "data_size": 63488 00:18:39.190 }, 00:18:39.190 { 00:18:39.190 "name": "pt2", 00:18:39.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.190 "is_configured": true, 00:18:39.190 "data_offset": 2048, 00:18:39.190 "data_size": 63488 00:18:39.190 } 00:18:39.190 ] 00:18:39.190 } 00:18:39.190 } 00:18:39.190 }' 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:39.190 pt2' 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.190 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:39.190 [2024-11-08 17:06:15.893439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f046baf2-ddcf-4263-b61b-b304fd9b9b5a 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f046baf2-ddcf-4263-b61b-b304fd9b9b5a ']' 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 [2024-11-08 17:06:15.925130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.449 [2024-11-08 17:06:15.925159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.449 [2024-11-08 17:06:15.925248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.449 [2024-11-08 17:06:15.925301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.449 [2024-11-08 17:06:15.925315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:39.449 17:06:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 [2024-11-08 17:06:16.025184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:39.449 [2024-11-08 17:06:16.027294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:39.449 [2024-11-08 17:06:16.027372] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:39.449 [2024-11-08 17:06:16.027428] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:39.449 [2024-11-08 17:06:16.027444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.449 [2024-11-08 17:06:16.027460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:39.449 request: 00:18:39.449 { 00:18:39.449 "name": "raid_bdev1", 00:18:39.449 "raid_level": "raid0", 00:18:39.449 "base_bdevs": [ 00:18:39.449 "malloc1", 00:18:39.449 "malloc2" 00:18:39.449 ], 00:18:39.449 "strip_size_kb": 64, 00:18:39.449 "superblock": false, 00:18:39.449 "method": "bdev_raid_create", 00:18:39.449 "req_id": 1 00:18:39.449 } 00:18:39.449 Got JSON-RPC error response 00:18:39.449 response: 00:18:39.449 { 00:18:39.449 "code": -17, 00:18:39.449 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:39.449 } 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 17:06:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 [2024-11-08 17:06:16.069170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:39.449 [2024-11-08 17:06:16.069237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.449 [2024-11-08 17:06:16.069260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:39.449 [2024-11-08 17:06:16.069272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.449 [2024-11-08 17:06:16.071663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.449 [2024-11-08 17:06:16.071833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:39.449 [2024-11-08 17:06:16.071942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:39.449 [2024-11-08 17:06:16.072008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:39.449 pt1 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:18:39.449 17:06:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.449 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.449 "name": "raid_bdev1", 00:18:39.449 "uuid": "f046baf2-ddcf-4263-b61b-b304fd9b9b5a", 00:18:39.449 "strip_size_kb": 64, 00:18:39.449 "state": "configuring", 00:18:39.449 "raid_level": "raid0", 00:18:39.449 "superblock": true, 00:18:39.449 "num_base_bdevs": 2, 00:18:39.449 "num_base_bdevs_discovered": 1, 00:18:39.449 "num_base_bdevs_operational": 2, 00:18:39.449 "base_bdevs_list": [ 
00:18:39.449 { 00:18:39.450 "name": "pt1", 00:18:39.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.450 "is_configured": true, 00:18:39.450 "data_offset": 2048, 00:18:39.450 "data_size": 63488 00:18:39.450 }, 00:18:39.450 { 00:18:39.450 "name": null, 00:18:39.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.450 "is_configured": false, 00:18:39.450 "data_offset": 2048, 00:18:39.450 "data_size": 63488 00:18:39.450 } 00:18:39.450 ] 00:18:39.450 }' 00:18:39.450 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.450 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.708 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:39.708 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:39.708 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:39.708 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:39.708 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.708 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.708 [2024-11-08 17:06:16.421268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:39.708 [2024-11-08 17:06:16.421345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.708 [2024-11-08 17:06:16.421366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:39.708 [2024-11-08 17:06:16.421379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.980 [2024-11-08 17:06:16.421907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.980 [2024-11-08 17:06:16.421934] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:39.980 [2024-11-08 17:06:16.422019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:39.980 [2024-11-08 17:06:16.422044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:39.980 [2024-11-08 17:06:16.422156] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:39.980 [2024-11-08 17:06:16.422168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:39.980 [2024-11-08 17:06:16.422417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:39.980 [2024-11-08 17:06:16.422554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:39.980 [2024-11-08 17:06:16.422567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:39.980 [2024-11-08 17:06:16.422697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.980 pt2 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.980 "name": "raid_bdev1", 00:18:39.980 "uuid": "f046baf2-ddcf-4263-b61b-b304fd9b9b5a", 00:18:39.980 "strip_size_kb": 64, 00:18:39.980 "state": "online", 00:18:39.980 "raid_level": "raid0", 00:18:39.980 "superblock": true, 00:18:39.980 "num_base_bdevs": 2, 00:18:39.980 "num_base_bdevs_discovered": 2, 00:18:39.980 "num_base_bdevs_operational": 2, 00:18:39.980 "base_bdevs_list": [ 00:18:39.980 { 00:18:39.980 "name": "pt1", 00:18:39.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.980 "is_configured": true, 00:18:39.980 "data_offset": 2048, 00:18:39.980 "data_size": 63488 00:18:39.980 }, 00:18:39.980 { 00:18:39.980 "name": "pt2", 00:18:39.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.980 "is_configured": true, 00:18:39.980 "data_offset": 2048, 00:18:39.980 "data_size": 
63488 00:18:39.980 } 00:18:39.980 ] 00:18:39.980 }' 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.980 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:40.239 [2024-11-08 17:06:16.745602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.239 "name": "raid_bdev1", 00:18:40.239 "aliases": [ 00:18:40.239 "f046baf2-ddcf-4263-b61b-b304fd9b9b5a" 00:18:40.239 ], 00:18:40.239 "product_name": "Raid Volume", 00:18:40.239 "block_size": 512, 00:18:40.239 "num_blocks": 126976, 00:18:40.239 "uuid": "f046baf2-ddcf-4263-b61b-b304fd9b9b5a", 00:18:40.239 "assigned_rate_limits": { 00:18:40.239 
"rw_ios_per_sec": 0, 00:18:40.239 "rw_mbytes_per_sec": 0, 00:18:40.239 "r_mbytes_per_sec": 0, 00:18:40.239 "w_mbytes_per_sec": 0 00:18:40.239 }, 00:18:40.239 "claimed": false, 00:18:40.239 "zoned": false, 00:18:40.239 "supported_io_types": { 00:18:40.239 "read": true, 00:18:40.239 "write": true, 00:18:40.239 "unmap": true, 00:18:40.239 "flush": true, 00:18:40.239 "reset": true, 00:18:40.239 "nvme_admin": false, 00:18:40.239 "nvme_io": false, 00:18:40.239 "nvme_io_md": false, 00:18:40.239 "write_zeroes": true, 00:18:40.239 "zcopy": false, 00:18:40.239 "get_zone_info": false, 00:18:40.239 "zone_management": false, 00:18:40.239 "zone_append": false, 00:18:40.239 "compare": false, 00:18:40.239 "compare_and_write": false, 00:18:40.239 "abort": false, 00:18:40.239 "seek_hole": false, 00:18:40.239 "seek_data": false, 00:18:40.239 "copy": false, 00:18:40.239 "nvme_iov_md": false 00:18:40.239 }, 00:18:40.239 "memory_domains": [ 00:18:40.239 { 00:18:40.239 "dma_device_id": "system", 00:18:40.239 "dma_device_type": 1 00:18:40.239 }, 00:18:40.239 { 00:18:40.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.239 "dma_device_type": 2 00:18:40.239 }, 00:18:40.239 { 00:18:40.239 "dma_device_id": "system", 00:18:40.239 "dma_device_type": 1 00:18:40.239 }, 00:18:40.239 { 00:18:40.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.239 "dma_device_type": 2 00:18:40.239 } 00:18:40.239 ], 00:18:40.239 "driver_specific": { 00:18:40.239 "raid": { 00:18:40.239 "uuid": "f046baf2-ddcf-4263-b61b-b304fd9b9b5a", 00:18:40.239 "strip_size_kb": 64, 00:18:40.239 "state": "online", 00:18:40.239 "raid_level": "raid0", 00:18:40.239 "superblock": true, 00:18:40.239 "num_base_bdevs": 2, 00:18:40.239 "num_base_bdevs_discovered": 2, 00:18:40.239 "num_base_bdevs_operational": 2, 00:18:40.239 "base_bdevs_list": [ 00:18:40.239 { 00:18:40.239 "name": "pt1", 00:18:40.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.239 "is_configured": true, 00:18:40.239 "data_offset": 2048, 00:18:40.239 
"data_size": 63488 00:18:40.239 }, 00:18:40.239 { 00:18:40.239 "name": "pt2", 00:18:40.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.239 "is_configured": true, 00:18:40.239 "data_offset": 2048, 00:18:40.239 "data_size": 63488 00:18:40.239 } 00:18:40.239 ] 00:18:40.239 } 00:18:40.239 } 00:18:40.239 }' 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:40.239 pt2' 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.239 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.240 [2024-11-08 17:06:16.905617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f046baf2-ddcf-4263-b61b-b304fd9b9b5a '!=' f046baf2-ddcf-4263-b61b-b304fd9b9b5a ']' 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60211 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60211 
']' 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60211 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:40.240 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60211 00:18:40.497 killing process with pid 60211 00:18:40.497 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:40.497 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:40.497 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60211' 00:18:40.497 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 60211 00:18:40.497 [2024-11-08 17:06:16.958660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:40.497 17:06:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 60211 00:18:40.497 [2024-11-08 17:06:16.958767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.498 [2024-11-08 17:06:16.958821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.498 [2024-11-08 17:06:16.958833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:40.498 [2024-11-08 17:06:17.094975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:41.432 17:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:41.432 00:18:41.432 real 0m3.466s 00:18:41.432 user 0m4.824s 00:18:41.432 sys 0m0.560s 00:18:41.432 17:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:41.432 
************************************ 00:18:41.432 END TEST raid_superblock_test 00:18:41.432 ************************************ 00:18:41.432 17:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.432 17:06:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:18:41.432 17:06:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:41.432 17:06:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:41.432 17:06:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:41.432 ************************************ 00:18:41.432 START TEST raid_read_error_test 00:18:41.432 ************************************ 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7JEsbRIUJA 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60417 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60417 00:18:41.432 17:06:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 60417 ']' 00:18:41.433 17:06:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.433 17:06:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:41.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:41.433 17:06:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.433 17:06:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:41.433 17:06:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:41.433 17:06:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.433 [2024-11-08 17:06:17.983551] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:18:41.433 [2024-11-08 17:06:17.983963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60417 ] 00:18:41.742 [2024-11-08 17:06:18.148410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.742 [2024-11-08 17:06:18.267923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.742 [2024-11-08 17:06:18.416833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.742 [2024-11-08 17:06:18.416886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.308 BaseBdev1_malloc 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.308 true 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.308 [2024-11-08 17:06:18.882386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:42.308 [2024-11-08 17:06:18.882446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.308 [2024-11-08 17:06:18.882468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:42.308 [2024-11-08 17:06:18.882481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.308 [2024-11-08 17:06:18.884788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.308 [2024-11-08 17:06:18.884826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:42.308 BaseBdev1 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.308 17:06:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.308 BaseBdev2_malloc 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.308 true 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.308 [2024-11-08 17:06:18.932482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:42.308 [2024-11-08 17:06:18.932634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.308 [2024-11-08 17:06:18.932659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:42.308 [2024-11-08 17:06:18.932672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.308 [2024-11-08 17:06:18.934992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.308 
[2024-11-08 17:06:18.935029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:42.308 BaseBdev2 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.308 [2024-11-08 17:06:18.940551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.308 [2024-11-08 17:06:18.942537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:42.308 [2024-11-08 17:06:18.942733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:42.308 [2024-11-08 17:06:18.942749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:42.308 [2024-11-08 17:06:18.943038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:42.308 [2024-11-08 17:06:18.943205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:42.308 [2024-11-08 17:06:18.943216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:42.308 [2024-11-08 17:06:18.943364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.308 "name": "raid_bdev1", 00:18:42.308 "uuid": "c5dbdc3d-f104-49ef-b603-b0bc92fc7992", 00:18:42.308 "strip_size_kb": 64, 00:18:42.308 "state": "online", 00:18:42.308 "raid_level": "raid0", 00:18:42.308 "superblock": true, 00:18:42.308 "num_base_bdevs": 2, 00:18:42.308 "num_base_bdevs_discovered": 2, 00:18:42.308 "num_base_bdevs_operational": 2, 00:18:42.308 "base_bdevs_list": [ 00:18:42.308 { 00:18:42.308 "name": "BaseBdev1", 00:18:42.308 "uuid": 
"3794f418-492a-5a5b-904c-c9f18cc5c14d", 00:18:42.308 "is_configured": true, 00:18:42.308 "data_offset": 2048, 00:18:42.308 "data_size": 63488 00:18:42.308 }, 00:18:42.308 { 00:18:42.308 "name": "BaseBdev2", 00:18:42.308 "uuid": "35f83b95-5896-51c8-84ae-3823ac5f3876", 00:18:42.308 "is_configured": true, 00:18:42.308 "data_offset": 2048, 00:18:42.308 "data_size": 63488 00:18:42.308 } 00:18:42.308 ] 00:18:42.308 }' 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.308 17:06:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.566 17:06:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:42.566 17:06:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:42.824 [2024-11-08 17:06:19.357677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.758 "name": "raid_bdev1", 00:18:43.758 "uuid": "c5dbdc3d-f104-49ef-b603-b0bc92fc7992", 00:18:43.758 "strip_size_kb": 64, 00:18:43.758 "state": "online", 00:18:43.758 "raid_level": "raid0", 00:18:43.758 "superblock": true, 00:18:43.758 "num_base_bdevs": 2, 00:18:43.758 "num_base_bdevs_discovered": 2, 00:18:43.758 "num_base_bdevs_operational": 2, 00:18:43.758 "base_bdevs_list": [ 00:18:43.758 { 00:18:43.758 "name": "BaseBdev1", 00:18:43.758 "uuid": 
"3794f418-492a-5a5b-904c-c9f18cc5c14d", 00:18:43.758 "is_configured": true, 00:18:43.758 "data_offset": 2048, 00:18:43.758 "data_size": 63488 00:18:43.758 }, 00:18:43.758 { 00:18:43.758 "name": "BaseBdev2", 00:18:43.758 "uuid": "35f83b95-5896-51c8-84ae-3823ac5f3876", 00:18:43.758 "is_configured": true, 00:18:43.758 "data_offset": 2048, 00:18:43.758 "data_size": 63488 00:18:43.758 } 00:18:43.758 ] 00:18:43.758 }' 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.758 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.016 [2024-11-08 17:06:20.624004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.016 [2024-11-08 17:06:20.624163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.016 [2024-11-08 17:06:20.627428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.016 [2024-11-08 17:06:20.627563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.016 [2024-11-08 17:06:20.627625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.016 [2024-11-08 17:06:20.627705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:44.016 { 00:18:44.016 "results": [ 00:18:44.016 { 00:18:44.016 "job": "raid_bdev1", 00:18:44.016 "core_mask": "0x1", 00:18:44.016 "workload": "randrw", 00:18:44.016 "percentage": 50, 00:18:44.016 "status": "finished", 00:18:44.016 "queue_depth": 1, 00:18:44.016 "io_size": 
131072, 00:18:44.016 "runtime": 1.264488, 00:18:44.016 "iops": 14060.23623790815, 00:18:44.016 "mibps": 1757.5295297385187, 00:18:44.016 "io_failed": 1, 00:18:44.016 "io_timeout": 0, 00:18:44.016 "avg_latency_us": 97.91775720342649, 00:18:44.016 "min_latency_us": 33.28, 00:18:44.016 "max_latency_us": 1751.8276923076924 00:18:44.016 } 00:18:44.016 ], 00:18:44.016 "core_count": 1 00:18:44.016 } 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60417 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 60417 ']' 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 60417 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60417 00:18:44.016 killing process with pid 60417 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60417' 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 60417 00:18:44.016 17:06:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 60417 00:18:44.016 [2024-11-08 17:06:20.656713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.274 [2024-11-08 17:06:20.751788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:44.838 17:06:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7JEsbRIUJA 00:18:44.838 17:06:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:44.838 17:06:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:44.838 17:06:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:18:44.838 17:06:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:18:44.838 17:06:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:44.839 17:06:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:44.839 17:06:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:18:44.839 00:18:44.839 real 0m3.645s 00:18:44.839 user 0m4.301s 00:18:44.839 sys 0m0.460s 00:18:44.839 ************************************ 00:18:44.839 END TEST raid_read_error_test 00:18:44.839 ************************************ 00:18:44.839 17:06:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:44.839 17:06:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.096 17:06:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:18:45.096 17:06:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:45.096 17:06:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:45.096 17:06:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.096 ************************************ 00:18:45.096 START TEST raid_write_error_test 00:18:45.096 ************************************ 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:18:45.096 
17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:45.096 17:06:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.r5oNHqixL3 00:18:45.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60546 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60546 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 60546 ']' 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:45.096 17:06:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.096 [2024-11-08 17:06:21.694442] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:18:45.096 [2024-11-08 17:06:21.694566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60546 ] 00:18:45.354 [2024-11-08 17:06:21.852435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.354 [2024-11-08 17:06:21.973503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.611 [2024-11-08 17:06:22.124015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.611 [2024-11-08 17:06:22.124097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.176 BaseBdev1_malloc 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.176 true 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.176 [2024-11-08 17:06:22.698184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:46.176 [2024-11-08 17:06:22.698259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.176 [2024-11-08 17:06:22.698287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:46.176 [2024-11-08 17:06:22.698301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.176 [2024-11-08 17:06:22.700811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.176 [2024-11-08 17:06:22.700859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:46.176 BaseBdev1 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.176 BaseBdev2_malloc 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:46.176 17:06:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.176 true 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.176 [2024-11-08 17:06:22.745021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:46.176 [2024-11-08 17:06:22.745095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.176 [2024-11-08 17:06:22.745116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:46.176 [2024-11-08 17:06:22.745129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.176 [2024-11-08 17:06:22.747565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.176 [2024-11-08 17:06:22.747609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:46.176 BaseBdev2 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.176 [2024-11-08 17:06:22.753106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:18:46.176 [2024-11-08 17:06:22.755210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.176 [2024-11-08 17:06:22.755422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:46.176 [2024-11-08 17:06:22.755440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:46.176 [2024-11-08 17:06:22.755730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:46.176 [2024-11-08 17:06:22.755927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:46.176 [2024-11-08 17:06:22.755951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:46.176 [2024-11-08 17:06:22.756135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.176 "name": "raid_bdev1", 00:18:46.176 "uuid": "32d1e4c9-f4cd-496b-8cd1-be0984f5eae5", 00:18:46.176 "strip_size_kb": 64, 00:18:46.176 "state": "online", 00:18:46.176 "raid_level": "raid0", 00:18:46.176 "superblock": true, 00:18:46.176 "num_base_bdevs": 2, 00:18:46.176 "num_base_bdevs_discovered": 2, 00:18:46.176 "num_base_bdevs_operational": 2, 00:18:46.176 "base_bdevs_list": [ 00:18:46.176 { 00:18:46.176 "name": "BaseBdev1", 00:18:46.176 "uuid": "318ee4f3-bcb6-509a-a767-4aa92a4e1114", 00:18:46.176 "is_configured": true, 00:18:46.176 "data_offset": 2048, 00:18:46.176 "data_size": 63488 00:18:46.176 }, 00:18:46.176 { 00:18:46.176 "name": "BaseBdev2", 00:18:46.176 "uuid": "79a9e941-09ca-5323-ad27-a8af12d1eff7", 00:18:46.176 "is_configured": true, 00:18:46.176 "data_offset": 2048, 00:18:46.176 "data_size": 63488 00:18:46.176 } 00:18:46.176 ] 00:18:46.176 }' 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.176 17:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.435 17:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:46.435 17:06:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:46.435 [2024-11-08 17:06:23.146401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.369 17:06:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.369 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.628 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.628 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.628 "name": "raid_bdev1", 00:18:47.628 "uuid": "32d1e4c9-f4cd-496b-8cd1-be0984f5eae5", 00:18:47.628 "strip_size_kb": 64, 00:18:47.628 "state": "online", 00:18:47.628 "raid_level": "raid0", 00:18:47.628 "superblock": true, 00:18:47.628 "num_base_bdevs": 2, 00:18:47.628 "num_base_bdevs_discovered": 2, 00:18:47.628 "num_base_bdevs_operational": 2, 00:18:47.628 "base_bdevs_list": [ 00:18:47.628 { 00:18:47.628 "name": "BaseBdev1", 00:18:47.628 "uuid": "318ee4f3-bcb6-509a-a767-4aa92a4e1114", 00:18:47.628 "is_configured": true, 00:18:47.628 "data_offset": 2048, 00:18:47.628 "data_size": 63488 00:18:47.628 }, 00:18:47.628 { 00:18:47.628 "name": "BaseBdev2", 00:18:47.628 "uuid": "79a9e941-09ca-5323-ad27-a8af12d1eff7", 00:18:47.628 "is_configured": true, 00:18:47.628 "data_offset": 2048, 00:18:47.628 "data_size": 63488 00:18:47.628 } 00:18:47.628 ] 00:18:47.628 }' 00:18:47.628 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.628 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.884 [2024-11-08 17:06:24.396943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.884 [2024-11-08 17:06:24.397175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.884 [2024-11-08 17:06:24.400372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.884 [2024-11-08 17:06:24.400551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.884 [2024-11-08 17:06:24.400596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.884 [2024-11-08 17:06:24.400609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:47.884 { 00:18:47.884 "results": [ 00:18:47.884 { 00:18:47.884 "job": "raid_bdev1", 00:18:47.884 "core_mask": "0x1", 00:18:47.884 "workload": "randrw", 00:18:47.884 "percentage": 50, 00:18:47.884 "status": "finished", 00:18:47.884 "queue_depth": 1, 00:18:47.884 "io_size": 131072, 00:18:47.884 "runtime": 1.248639, 00:18:47.884 "iops": 13892.72640050487, 00:18:47.884 "mibps": 1736.5908000631086, 00:18:47.884 "io_failed": 1, 00:18:47.884 "io_timeout": 0, 00:18:47.884 "avg_latency_us": 99.15326652595732, 00:18:47.884 "min_latency_us": 33.47692307692308, 00:18:47.884 "max_latency_us": 1852.6523076923077 00:18:47.884 } 00:18:47.884 ], 00:18:47.884 "core_count": 1 00:18:47.884 } 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60546 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 60546 ']' 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 60546 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60546 00:18:47.884 killing process with pid 60546 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60546' 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 60546 00:18:47.884 17:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 60546 00:18:47.884 [2024-11-08 17:06:24.433442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.884 [2024-11-08 17:06:24.525066] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.862 17:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.r5oNHqixL3 00:18:48.862 17:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:48.862 17:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:48.862 ************************************ 00:18:48.862 END TEST raid_write_error_test 00:18:48.862 ************************************ 00:18:48.862 17:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:18:48.862 17:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:18:48.862 
17:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:48.862 17:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:48.862 17:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:18:48.862 00:18:48.863 real 0m3.710s 00:18:48.863 user 0m4.456s 00:18:48.863 sys 0m0.444s 00:18:48.863 17:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:48.863 17:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.863 17:06:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:48.863 17:06:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:18:48.863 17:06:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:48.863 17:06:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:48.863 17:06:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.863 ************************************ 00:18:48.863 START TEST raid_state_function_test 00:18:48.863 ************************************ 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:48.863 Process raid pid: 60684 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60684 
00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60684' 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60684 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60684 ']' 00:18:48.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.863 17:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:48.863 [2024-11-08 17:06:25.471109] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:18:48.863 [2024-11-08 17:06:25.471574] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.120 [2024-11-08 17:06:25.635123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.120 [2024-11-08 17:06:25.752611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.377 [2024-11-08 17:06:25.901783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.377 [2024-11-08 17:06:25.901833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.634 [2024-11-08 17:06:26.325052] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:49.634 [2024-11-08 17:06:26.325109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:49.634 [2024-11-08 17:06:26.325120] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:49.634 [2024-11-08 17:06:26.325129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.634 17:06:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.634 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.891 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.891 "name": "Existed_Raid", 00:18:49.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.891 "strip_size_kb": 64, 00:18:49.891 "state": "configuring", 00:18:49.891 
"raid_level": "concat", 00:18:49.891 "superblock": false, 00:18:49.891 "num_base_bdevs": 2, 00:18:49.891 "num_base_bdevs_discovered": 0, 00:18:49.891 "num_base_bdevs_operational": 2, 00:18:49.891 "base_bdevs_list": [ 00:18:49.891 { 00:18:49.891 "name": "BaseBdev1", 00:18:49.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.891 "is_configured": false, 00:18:49.891 "data_offset": 0, 00:18:49.891 "data_size": 0 00:18:49.891 }, 00:18:49.891 { 00:18:49.891 "name": "BaseBdev2", 00:18:49.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.891 "is_configured": false, 00:18:49.891 "data_offset": 0, 00:18:49.891 "data_size": 0 00:18:49.891 } 00:18:49.891 ] 00:18:49.891 }' 00:18:49.891 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.891 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.148 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:50.148 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.148 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.148 [2024-11-08 17:06:26.645088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.148 [2024-11-08 17:06:26.645123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:50.148 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.148 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:50.148 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.148 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:18:50.148 [2024-11-08 17:06:26.653081] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:50.148 [2024-11-08 17:06:26.653123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:50.148 [2024-11-08 17:06:26.653133] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.148 [2024-11-08 17:06:26.653146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.148 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.149 [2024-11-08 17:06:26.687712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.149 BaseBdev1 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.149 [ 00:18:50.149 { 00:18:50.149 "name": "BaseBdev1", 00:18:50.149 "aliases": [ 00:18:50.149 "1919dcee-d56c-46ac-b6f2-af8b1f4d5785" 00:18:50.149 ], 00:18:50.149 "product_name": "Malloc disk", 00:18:50.149 "block_size": 512, 00:18:50.149 "num_blocks": 65536, 00:18:50.149 "uuid": "1919dcee-d56c-46ac-b6f2-af8b1f4d5785", 00:18:50.149 "assigned_rate_limits": { 00:18:50.149 "rw_ios_per_sec": 0, 00:18:50.149 "rw_mbytes_per_sec": 0, 00:18:50.149 "r_mbytes_per_sec": 0, 00:18:50.149 "w_mbytes_per_sec": 0 00:18:50.149 }, 00:18:50.149 "claimed": true, 00:18:50.149 "claim_type": "exclusive_write", 00:18:50.149 "zoned": false, 00:18:50.149 "supported_io_types": { 00:18:50.149 "read": true, 00:18:50.149 "write": true, 00:18:50.149 "unmap": true, 00:18:50.149 "flush": true, 00:18:50.149 "reset": true, 00:18:50.149 "nvme_admin": false, 00:18:50.149 "nvme_io": false, 00:18:50.149 "nvme_io_md": false, 00:18:50.149 "write_zeroes": true, 00:18:50.149 "zcopy": true, 00:18:50.149 "get_zone_info": false, 00:18:50.149 "zone_management": false, 00:18:50.149 "zone_append": false, 00:18:50.149 "compare": false, 00:18:50.149 "compare_and_write": false, 00:18:50.149 "abort": true, 00:18:50.149 "seek_hole": false, 00:18:50.149 "seek_data": false, 00:18:50.149 "copy": true, 00:18:50.149 "nvme_iov_md": 
false 00:18:50.149 }, 00:18:50.149 "memory_domains": [ 00:18:50.149 { 00:18:50.149 "dma_device_id": "system", 00:18:50.149 "dma_device_type": 1 00:18:50.149 }, 00:18:50.149 { 00:18:50.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.149 "dma_device_type": 2 00:18:50.149 } 00:18:50.149 ], 00:18:50.149 "driver_specific": {} 00:18:50.149 } 00:18:50.149 ] 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.149 
17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.149 "name": "Existed_Raid", 00:18:50.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.149 "strip_size_kb": 64, 00:18:50.149 "state": "configuring", 00:18:50.149 "raid_level": "concat", 00:18:50.149 "superblock": false, 00:18:50.149 "num_base_bdevs": 2, 00:18:50.149 "num_base_bdevs_discovered": 1, 00:18:50.149 "num_base_bdevs_operational": 2, 00:18:50.149 "base_bdevs_list": [ 00:18:50.149 { 00:18:50.149 "name": "BaseBdev1", 00:18:50.149 "uuid": "1919dcee-d56c-46ac-b6f2-af8b1f4d5785", 00:18:50.149 "is_configured": true, 00:18:50.149 "data_offset": 0, 00:18:50.149 "data_size": 65536 00:18:50.149 }, 00:18:50.149 { 00:18:50.149 "name": "BaseBdev2", 00:18:50.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.149 "is_configured": false, 00:18:50.149 "data_offset": 0, 00:18:50.149 "data_size": 0 00:18:50.149 } 00:18:50.149 ] 00:18:50.149 }' 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.149 17:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.406 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:50.406 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.406 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.406 [2024-11-08 17:06:27.039866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.406 [2024-11-08 17:06:27.039926] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:50.406 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.406 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:50.406 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.406 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.406 [2024-11-08 17:06:27.047936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.406 [2024-11-08 17:06:27.049972] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.406 [2024-11-08 17:06:27.050016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.406 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.406 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.407 "name": "Existed_Raid", 00:18:50.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.407 "strip_size_kb": 64, 00:18:50.407 "state": "configuring", 00:18:50.407 "raid_level": "concat", 00:18:50.407 "superblock": false, 00:18:50.407 "num_base_bdevs": 2, 00:18:50.407 "num_base_bdevs_discovered": 1, 00:18:50.407 "num_base_bdevs_operational": 2, 00:18:50.407 "base_bdevs_list": [ 00:18:50.407 { 00:18:50.407 "name": "BaseBdev1", 00:18:50.407 "uuid": "1919dcee-d56c-46ac-b6f2-af8b1f4d5785", 00:18:50.407 "is_configured": true, 00:18:50.407 "data_offset": 0, 00:18:50.407 "data_size": 65536 00:18:50.407 }, 00:18:50.407 { 00:18:50.407 "name": "BaseBdev2", 00:18:50.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.407 "is_configured": false, 00:18:50.407 "data_offset": 0, 00:18:50.407 "data_size": 0 00:18:50.407 } 
00:18:50.407 ] 00:18:50.407 }' 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.407 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.664 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:50.664 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.664 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.922 [2024-11-08 17:06:27.404507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.922 [2024-11-08 17:06:27.404782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:50.922 [2024-11-08 17:06:27.404817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:50.922 [2024-11-08 17:06:27.405162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:50.922 [2024-11-08 17:06:27.405351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:50.922 [2024-11-08 17:06:27.405434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:50.922 [2024-11-08 17:06:27.405746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.922 BaseBdev2 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:50.922 17:06:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.922 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.922 [ 00:18:50.922 { 00:18:50.922 "name": "BaseBdev2", 00:18:50.922 "aliases": [ 00:18:50.922 "79c06479-be26-4ded-ac07-1552855d3db4" 00:18:50.922 ], 00:18:50.922 "product_name": "Malloc disk", 00:18:50.922 "block_size": 512, 00:18:50.922 "num_blocks": 65536, 00:18:50.923 "uuid": "79c06479-be26-4ded-ac07-1552855d3db4", 00:18:50.923 "assigned_rate_limits": { 00:18:50.923 "rw_ios_per_sec": 0, 00:18:50.923 "rw_mbytes_per_sec": 0, 00:18:50.923 "r_mbytes_per_sec": 0, 00:18:50.923 "w_mbytes_per_sec": 0 00:18:50.923 }, 00:18:50.923 "claimed": true, 00:18:50.923 "claim_type": "exclusive_write", 00:18:50.923 "zoned": false, 00:18:50.923 "supported_io_types": { 00:18:50.923 "read": true, 00:18:50.923 "write": true, 00:18:50.923 "unmap": true, 00:18:50.923 "flush": true, 00:18:50.923 "reset": true, 00:18:50.923 "nvme_admin": false, 00:18:50.923 "nvme_io": false, 00:18:50.923 "nvme_io_md": 
false, 00:18:50.923 "write_zeroes": true, 00:18:50.923 "zcopy": true, 00:18:50.923 "get_zone_info": false, 00:18:50.923 "zone_management": false, 00:18:50.923 "zone_append": false, 00:18:50.923 "compare": false, 00:18:50.923 "compare_and_write": false, 00:18:50.923 "abort": true, 00:18:50.923 "seek_hole": false, 00:18:50.923 "seek_data": false, 00:18:50.923 "copy": true, 00:18:50.923 "nvme_iov_md": false 00:18:50.923 }, 00:18:50.923 "memory_domains": [ 00:18:50.923 { 00:18:50.923 "dma_device_id": "system", 00:18:50.923 "dma_device_type": 1 00:18:50.923 }, 00:18:50.923 { 00:18:50.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.923 "dma_device_type": 2 00:18:50.923 } 00:18:50.923 ], 00:18:50.923 "driver_specific": {} 00:18:50.923 } 00:18:50.923 ] 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.923 "name": "Existed_Raid", 00:18:50.923 "uuid": "b227dbae-c4f4-4aa7-ac24-362cfa1f0fbd", 00:18:50.923 "strip_size_kb": 64, 00:18:50.923 "state": "online", 00:18:50.923 "raid_level": "concat", 00:18:50.923 "superblock": false, 00:18:50.923 "num_base_bdevs": 2, 00:18:50.923 "num_base_bdevs_discovered": 2, 00:18:50.923 "num_base_bdevs_operational": 2, 00:18:50.923 "base_bdevs_list": [ 00:18:50.923 { 00:18:50.923 "name": "BaseBdev1", 00:18:50.923 "uuid": "1919dcee-d56c-46ac-b6f2-af8b1f4d5785", 00:18:50.923 "is_configured": true, 00:18:50.923 "data_offset": 0, 00:18:50.923 "data_size": 65536 00:18:50.923 }, 00:18:50.923 { 00:18:50.923 "name": "BaseBdev2", 00:18:50.923 "uuid": "79c06479-be26-4ded-ac07-1552855d3db4", 00:18:50.923 "is_configured": true, 00:18:50.923 "data_offset": 0, 00:18:50.923 "data_size": 65536 00:18:50.923 } 00:18:50.923 ] 00:18:50.923 }' 00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:18:50.923 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.181 [2024-11-08 17:06:27.768969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:51.181 "name": "Existed_Raid", 00:18:51.181 "aliases": [ 00:18:51.181 "b227dbae-c4f4-4aa7-ac24-362cfa1f0fbd" 00:18:51.181 ], 00:18:51.181 "product_name": "Raid Volume", 00:18:51.181 "block_size": 512, 00:18:51.181 "num_blocks": 131072, 00:18:51.181 "uuid": "b227dbae-c4f4-4aa7-ac24-362cfa1f0fbd", 00:18:51.181 "assigned_rate_limits": { 00:18:51.181 "rw_ios_per_sec": 0, 00:18:51.181 "rw_mbytes_per_sec": 0, 00:18:51.181 "r_mbytes_per_sec": 
0, 00:18:51.181 "w_mbytes_per_sec": 0 00:18:51.181 }, 00:18:51.181 "claimed": false, 00:18:51.181 "zoned": false, 00:18:51.181 "supported_io_types": { 00:18:51.181 "read": true, 00:18:51.181 "write": true, 00:18:51.181 "unmap": true, 00:18:51.181 "flush": true, 00:18:51.181 "reset": true, 00:18:51.181 "nvme_admin": false, 00:18:51.181 "nvme_io": false, 00:18:51.181 "nvme_io_md": false, 00:18:51.181 "write_zeroes": true, 00:18:51.181 "zcopy": false, 00:18:51.181 "get_zone_info": false, 00:18:51.181 "zone_management": false, 00:18:51.181 "zone_append": false, 00:18:51.181 "compare": false, 00:18:51.181 "compare_and_write": false, 00:18:51.181 "abort": false, 00:18:51.181 "seek_hole": false, 00:18:51.181 "seek_data": false, 00:18:51.181 "copy": false, 00:18:51.181 "nvme_iov_md": false 00:18:51.181 }, 00:18:51.181 "memory_domains": [ 00:18:51.181 { 00:18:51.181 "dma_device_id": "system", 00:18:51.181 "dma_device_type": 1 00:18:51.181 }, 00:18:51.181 { 00:18:51.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.181 "dma_device_type": 2 00:18:51.181 }, 00:18:51.181 { 00:18:51.181 "dma_device_id": "system", 00:18:51.181 "dma_device_type": 1 00:18:51.181 }, 00:18:51.181 { 00:18:51.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.181 "dma_device_type": 2 00:18:51.181 } 00:18:51.181 ], 00:18:51.181 "driver_specific": { 00:18:51.181 "raid": { 00:18:51.181 "uuid": "b227dbae-c4f4-4aa7-ac24-362cfa1f0fbd", 00:18:51.181 "strip_size_kb": 64, 00:18:51.181 "state": "online", 00:18:51.181 "raid_level": "concat", 00:18:51.181 "superblock": false, 00:18:51.181 "num_base_bdevs": 2, 00:18:51.181 "num_base_bdevs_discovered": 2, 00:18:51.181 "num_base_bdevs_operational": 2, 00:18:51.181 "base_bdevs_list": [ 00:18:51.181 { 00:18:51.181 "name": "BaseBdev1", 00:18:51.181 "uuid": "1919dcee-d56c-46ac-b6f2-af8b1f4d5785", 00:18:51.181 "is_configured": true, 00:18:51.181 "data_offset": 0, 00:18:51.181 "data_size": 65536 00:18:51.181 }, 00:18:51.181 { 00:18:51.181 "name": "BaseBdev2", 
00:18:51.181 "uuid": "79c06479-be26-4ded-ac07-1552855d3db4", 00:18:51.181 "is_configured": true, 00:18:51.181 "data_offset": 0, 00:18:51.181 "data_size": 65536 00:18:51.181 } 00:18:51.181 ] 00:18:51.181 } 00:18:51.181 } 00:18:51.181 }' 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:51.181 BaseBdev2' 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.181 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.439 [2024-11-08 17:06:27.920751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:51.439 [2024-11-08 17:06:27.920797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.439 [2024-11-08 17:06:27.920854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.439 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.440 17:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.440 17:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.440 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.440 "name": "Existed_Raid", 00:18:51.440 "uuid": "b227dbae-c4f4-4aa7-ac24-362cfa1f0fbd", 00:18:51.440 "strip_size_kb": 64, 00:18:51.440 
"state": "offline", 00:18:51.440 "raid_level": "concat", 00:18:51.440 "superblock": false, 00:18:51.440 "num_base_bdevs": 2, 00:18:51.440 "num_base_bdevs_discovered": 1, 00:18:51.440 "num_base_bdevs_operational": 1, 00:18:51.440 "base_bdevs_list": [ 00:18:51.440 { 00:18:51.440 "name": null, 00:18:51.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.440 "is_configured": false, 00:18:51.440 "data_offset": 0, 00:18:51.440 "data_size": 65536 00:18:51.440 }, 00:18:51.440 { 00:18:51.440 "name": "BaseBdev2", 00:18:51.440 "uuid": "79c06479-be26-4ded-ac07-1552855d3db4", 00:18:51.440 "is_configured": true, 00:18:51.440 "data_offset": 0, 00:18:51.440 "data_size": 65536 00:18:51.440 } 00:18:51.440 ] 00:18:51.440 }' 00:18:51.440 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.440 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.697 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.697 [2024-11-08 17:06:28.366725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:51.697 [2024-11-08 17:06:28.366793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60684 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60684 ']' 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 60684 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60684 00:18:51.954 killing process with pid 60684 00:18:51.954 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:51.955 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:51.955 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60684' 00:18:51.955 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60684 00:18:51.955 [2024-11-08 17:06:28.506866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.955 17:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60684 00:18:51.955 [2024-11-08 17:06:28.518024] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:52.887 ************************************ 00:18:52.887 END TEST raid_state_function_test 00:18:52.887 ************************************ 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:52.887 00:18:52.887 real 0m3.887s 00:18:52.887 user 0m5.540s 00:18:52.887 sys 0m0.613s 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.887 17:06:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:18:52.887 17:06:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:18:52.887 17:06:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:52.887 17:06:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.887 ************************************ 00:18:52.887 START TEST raid_state_function_test_sb 00:18:52.887 ************************************ 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:52.887 Process raid pid: 60926 00:18:52.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60926 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60926' 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60926 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60926 ']' 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:52.887 17:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.887 [2024-11-08 17:06:29.427164] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:18:52.887 [2024-11-08 17:06:29.427303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.887 [2024-11-08 17:06:29.584403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.146 [2024-11-08 17:06:29.706715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.146 [2024-11-08 17:06:29.859023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.406 [2024-11-08 17:06:29.859276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.665 [2024-11-08 17:06:30.304821] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:53.665 [2024-11-08 17:06:30.304885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:53.665 [2024-11-08 17:06:30.304896] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:53.665 [2024-11-08 17:06:30.304907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.665 "name": "Existed_Raid", 00:18:53.665 "uuid": "ee51d9f0-cb70-44bf-980f-0d1e4d475fc0", 00:18:53.665 
"strip_size_kb": 64, 00:18:53.665 "state": "configuring", 00:18:53.665 "raid_level": "concat", 00:18:53.665 "superblock": true, 00:18:53.665 "num_base_bdevs": 2, 00:18:53.665 "num_base_bdevs_discovered": 0, 00:18:53.665 "num_base_bdevs_operational": 2, 00:18:53.665 "base_bdevs_list": [ 00:18:53.665 { 00:18:53.665 "name": "BaseBdev1", 00:18:53.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.665 "is_configured": false, 00:18:53.665 "data_offset": 0, 00:18:53.665 "data_size": 0 00:18:53.665 }, 00:18:53.665 { 00:18:53.665 "name": "BaseBdev2", 00:18:53.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.665 "is_configured": false, 00:18:53.665 "data_offset": 0, 00:18:53.665 "data_size": 0 00:18:53.665 } 00:18:53.665 ] 00:18:53.665 }' 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.665 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.923 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:53.923 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.923 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.183 [2024-11-08 17:06:30.636876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:54.183 [2024-11-08 17:06:30.636917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.183 [2024-11-08 17:06:30.644860] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:54.183 [2024-11-08 17:06:30.644901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.183 [2024-11-08 17:06:30.644911] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.183 [2024-11-08 17:06:30.644923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.183 [2024-11-08 17:06:30.679643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.183 BaseBdev1 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.183 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.183 [ 00:18:54.183 { 00:18:54.183 "name": "BaseBdev1", 00:18:54.183 "aliases": [ 00:18:54.183 "562e647d-1427-4506-a380-10c7f768692b" 00:18:54.183 ], 00:18:54.183 "product_name": "Malloc disk", 00:18:54.183 "block_size": 512, 00:18:54.183 "num_blocks": 65536, 00:18:54.183 "uuid": "562e647d-1427-4506-a380-10c7f768692b", 00:18:54.183 "assigned_rate_limits": { 00:18:54.183 "rw_ios_per_sec": 0, 00:18:54.183 "rw_mbytes_per_sec": 0, 00:18:54.183 "r_mbytes_per_sec": 0, 00:18:54.183 "w_mbytes_per_sec": 0 00:18:54.183 }, 00:18:54.183 "claimed": true, 00:18:54.183 "claim_type": "exclusive_write", 00:18:54.183 "zoned": false, 00:18:54.183 "supported_io_types": { 00:18:54.183 "read": true, 00:18:54.183 "write": true, 00:18:54.183 "unmap": true, 00:18:54.183 "flush": true, 00:18:54.183 "reset": true, 00:18:54.183 "nvme_admin": false, 00:18:54.183 "nvme_io": false, 00:18:54.184 "nvme_io_md": false, 00:18:54.184 "write_zeroes": true, 00:18:54.184 "zcopy": true, 00:18:54.184 "get_zone_info": false, 00:18:54.184 "zone_management": false, 00:18:54.184 "zone_append": false, 00:18:54.184 "compare": false, 00:18:54.184 
"compare_and_write": false, 00:18:54.184 "abort": true, 00:18:54.184 "seek_hole": false, 00:18:54.184 "seek_data": false, 00:18:54.184 "copy": true, 00:18:54.184 "nvme_iov_md": false 00:18:54.184 }, 00:18:54.184 "memory_domains": [ 00:18:54.184 { 00:18:54.184 "dma_device_id": "system", 00:18:54.184 "dma_device_type": 1 00:18:54.184 }, 00:18:54.184 { 00:18:54.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.184 "dma_device_type": 2 00:18:54.184 } 00:18:54.184 ], 00:18:54.184 "driver_specific": {} 00:18:54.184 } 00:18:54.184 ] 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.184 17:06:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.184 "name": "Existed_Raid", 00:18:54.184 "uuid": "66a540e8-a7cf-48f2-ab59-d2d268955d1a", 00:18:54.184 "strip_size_kb": 64, 00:18:54.184 "state": "configuring", 00:18:54.184 "raid_level": "concat", 00:18:54.184 "superblock": true, 00:18:54.184 "num_base_bdevs": 2, 00:18:54.184 "num_base_bdevs_discovered": 1, 00:18:54.184 "num_base_bdevs_operational": 2, 00:18:54.184 "base_bdevs_list": [ 00:18:54.184 { 00:18:54.184 "name": "BaseBdev1", 00:18:54.184 "uuid": "562e647d-1427-4506-a380-10c7f768692b", 00:18:54.184 "is_configured": true, 00:18:54.184 "data_offset": 2048, 00:18:54.184 "data_size": 63488 00:18:54.184 }, 00:18:54.184 { 00:18:54.184 "name": "BaseBdev2", 00:18:54.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.184 "is_configured": false, 00:18:54.184 "data_offset": 0, 00:18:54.184 "data_size": 0 00:18:54.184 } 00:18:54.184 ] 00:18:54.184 }' 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.184 17:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.444 [2024-11-08 17:06:31.031806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:54.444 [2024-11-08 17:06:31.031865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.444 [2024-11-08 17:06:31.039858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.444 [2024-11-08 17:06:31.041858] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.444 [2024-11-08 17:06:31.041903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.444 "name": "Existed_Raid", 00:18:54.444 "uuid": "2e62c699-c8ad-4bab-b50e-8ab314365a43", 00:18:54.444 "strip_size_kb": 64, 00:18:54.444 "state": "configuring", 00:18:54.444 "raid_level": "concat", 00:18:54.444 "superblock": true, 00:18:54.444 "num_base_bdevs": 2, 00:18:54.444 "num_base_bdevs_discovered": 1, 00:18:54.444 "num_base_bdevs_operational": 2, 00:18:54.444 "base_bdevs_list": [ 00:18:54.444 { 00:18:54.444 "name": "BaseBdev1", 00:18:54.444 "uuid": 
"562e647d-1427-4506-a380-10c7f768692b", 00:18:54.444 "is_configured": true, 00:18:54.444 "data_offset": 2048, 00:18:54.444 "data_size": 63488 00:18:54.444 }, 00:18:54.444 { 00:18:54.444 "name": "BaseBdev2", 00:18:54.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.444 "is_configured": false, 00:18:54.444 "data_offset": 0, 00:18:54.444 "data_size": 0 00:18:54.444 } 00:18:54.444 ] 00:18:54.444 }' 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.444 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.704 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:54.704 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.704 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.704 [2024-11-08 17:06:31.396786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.704 [2024-11-08 17:06:31.397055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:54.704 [2024-11-08 17:06:31.397069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:54.704 [2024-11-08 17:06:31.397348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:54.704 [2024-11-08 17:06:31.397488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:54.704 [2024-11-08 17:06:31.397500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:54.705 [2024-11-08 17:06:31.397634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.705 BaseBdev2 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.705 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.962 [ 00:18:54.962 { 00:18:54.962 "name": "BaseBdev2", 00:18:54.962 "aliases": [ 00:18:54.962 "f5fe320f-c85f-4a20-8817-ad842165a96c" 00:18:54.962 ], 00:18:54.962 "product_name": "Malloc disk", 00:18:54.962 "block_size": 512, 00:18:54.962 "num_blocks": 65536, 00:18:54.962 "uuid": "f5fe320f-c85f-4a20-8817-ad842165a96c", 00:18:54.962 "assigned_rate_limits": { 00:18:54.962 "rw_ios_per_sec": 0, 00:18:54.962 "rw_mbytes_per_sec": 0, 00:18:54.962 "r_mbytes_per_sec": 0, 
00:18:54.962 "w_mbytes_per_sec": 0 00:18:54.962 }, 00:18:54.962 "claimed": true, 00:18:54.962 "claim_type": "exclusive_write", 00:18:54.962 "zoned": false, 00:18:54.962 "supported_io_types": { 00:18:54.962 "read": true, 00:18:54.962 "write": true, 00:18:54.962 "unmap": true, 00:18:54.962 "flush": true, 00:18:54.962 "reset": true, 00:18:54.962 "nvme_admin": false, 00:18:54.962 "nvme_io": false, 00:18:54.962 "nvme_io_md": false, 00:18:54.962 "write_zeroes": true, 00:18:54.962 "zcopy": true, 00:18:54.962 "get_zone_info": false, 00:18:54.962 "zone_management": false, 00:18:54.962 "zone_append": false, 00:18:54.962 "compare": false, 00:18:54.962 "compare_and_write": false, 00:18:54.962 "abort": true, 00:18:54.962 "seek_hole": false, 00:18:54.962 "seek_data": false, 00:18:54.962 "copy": true, 00:18:54.962 "nvme_iov_md": false 00:18:54.962 }, 00:18:54.962 "memory_domains": [ 00:18:54.962 { 00:18:54.962 "dma_device_id": "system", 00:18:54.962 "dma_device_type": 1 00:18:54.962 }, 00:18:54.962 { 00:18:54.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.962 "dma_device_type": 2 00:18:54.962 } 00:18:54.962 ], 00:18:54.962 "driver_specific": {} 00:18:54.962 } 00:18:54.962 ] 00:18:54.962 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.962 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:54.962 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:54.962 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:54.962 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.963 "name": "Existed_Raid", 00:18:54.963 "uuid": "2e62c699-c8ad-4bab-b50e-8ab314365a43", 00:18:54.963 "strip_size_kb": 64, 00:18:54.963 "state": "online", 00:18:54.963 "raid_level": "concat", 00:18:54.963 "superblock": true, 00:18:54.963 "num_base_bdevs": 2, 00:18:54.963 "num_base_bdevs_discovered": 2, 00:18:54.963 "num_base_bdevs_operational": 2, 00:18:54.963 "base_bdevs_list": [ 00:18:54.963 { 00:18:54.963 "name": "BaseBdev1", 00:18:54.963 "uuid": 
"562e647d-1427-4506-a380-10c7f768692b", 00:18:54.963 "is_configured": true, 00:18:54.963 "data_offset": 2048, 00:18:54.963 "data_size": 63488 00:18:54.963 }, 00:18:54.963 { 00:18:54.963 "name": "BaseBdev2", 00:18:54.963 "uuid": "f5fe320f-c85f-4a20-8817-ad842165a96c", 00:18:54.963 "is_configured": true, 00:18:54.963 "data_offset": 2048, 00:18:54.963 "data_size": 63488 00:18:54.963 } 00:18:54.963 ] 00:18:54.963 }' 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.963 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.219 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:55.219 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:55.219 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:55.219 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.220 [2024-11-08 17:06:31.741215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:55.220 "name": "Existed_Raid", 00:18:55.220 "aliases": [ 00:18:55.220 "2e62c699-c8ad-4bab-b50e-8ab314365a43" 00:18:55.220 ], 00:18:55.220 "product_name": "Raid Volume", 00:18:55.220 "block_size": 512, 00:18:55.220 "num_blocks": 126976, 00:18:55.220 "uuid": "2e62c699-c8ad-4bab-b50e-8ab314365a43", 00:18:55.220 "assigned_rate_limits": { 00:18:55.220 "rw_ios_per_sec": 0, 00:18:55.220 "rw_mbytes_per_sec": 0, 00:18:55.220 "r_mbytes_per_sec": 0, 00:18:55.220 "w_mbytes_per_sec": 0 00:18:55.220 }, 00:18:55.220 "claimed": false, 00:18:55.220 "zoned": false, 00:18:55.220 "supported_io_types": { 00:18:55.220 "read": true, 00:18:55.220 "write": true, 00:18:55.220 "unmap": true, 00:18:55.220 "flush": true, 00:18:55.220 "reset": true, 00:18:55.220 "nvme_admin": false, 00:18:55.220 "nvme_io": false, 00:18:55.220 "nvme_io_md": false, 00:18:55.220 "write_zeroes": true, 00:18:55.220 "zcopy": false, 00:18:55.220 "get_zone_info": false, 00:18:55.220 "zone_management": false, 00:18:55.220 "zone_append": false, 00:18:55.220 "compare": false, 00:18:55.220 "compare_and_write": false, 00:18:55.220 "abort": false, 00:18:55.220 "seek_hole": false, 00:18:55.220 "seek_data": false, 00:18:55.220 "copy": false, 00:18:55.220 "nvme_iov_md": false 00:18:55.220 }, 00:18:55.220 "memory_domains": [ 00:18:55.220 { 00:18:55.220 "dma_device_id": "system", 00:18:55.220 "dma_device_type": 1 00:18:55.220 }, 00:18:55.220 { 00:18:55.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.220 "dma_device_type": 2 00:18:55.220 }, 00:18:55.220 { 00:18:55.220 "dma_device_id": "system", 00:18:55.220 "dma_device_type": 1 00:18:55.220 }, 00:18:55.220 { 00:18:55.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.220 "dma_device_type": 2 00:18:55.220 } 00:18:55.220 ], 00:18:55.220 "driver_specific": { 00:18:55.220 "raid": { 00:18:55.220 "uuid": "2e62c699-c8ad-4bab-b50e-8ab314365a43", 00:18:55.220 
"strip_size_kb": 64, 00:18:55.220 "state": "online", 00:18:55.220 "raid_level": "concat", 00:18:55.220 "superblock": true, 00:18:55.220 "num_base_bdevs": 2, 00:18:55.220 "num_base_bdevs_discovered": 2, 00:18:55.220 "num_base_bdevs_operational": 2, 00:18:55.220 "base_bdevs_list": [ 00:18:55.220 { 00:18:55.220 "name": "BaseBdev1", 00:18:55.220 "uuid": "562e647d-1427-4506-a380-10c7f768692b", 00:18:55.220 "is_configured": true, 00:18:55.220 "data_offset": 2048, 00:18:55.220 "data_size": 63488 00:18:55.220 }, 00:18:55.220 { 00:18:55.220 "name": "BaseBdev2", 00:18:55.220 "uuid": "f5fe320f-c85f-4a20-8817-ad842165a96c", 00:18:55.220 "is_configured": true, 00:18:55.220 "data_offset": 2048, 00:18:55.220 "data_size": 63488 00:18:55.220 } 00:18:55.220 ] 00:18:55.220 } 00:18:55.220 } 00:18:55.220 }' 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:55.220 BaseBdev2' 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.220 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.220 [2024-11-08 17:06:31.908993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:55.220 [2024-11-08 17:06:31.909029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.220 [2024-11-08 17:06:31.909087] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.479 
17:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.479 17:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.479 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.479 "name": "Existed_Raid", 00:18:55.479 "uuid": "2e62c699-c8ad-4bab-b50e-8ab314365a43", 00:18:55.479 "strip_size_kb": 64, 00:18:55.479 "state": "offline", 00:18:55.479 "raid_level": "concat", 00:18:55.479 "superblock": true, 00:18:55.479 "num_base_bdevs": 2, 00:18:55.479 "num_base_bdevs_discovered": 1, 00:18:55.479 "num_base_bdevs_operational": 1, 00:18:55.479 "base_bdevs_list": [ 00:18:55.479 { 00:18:55.479 "name": null, 00:18:55.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.479 "is_configured": false, 00:18:55.479 "data_offset": 0, 00:18:55.479 "data_size": 63488 00:18:55.479 }, 00:18:55.479 { 00:18:55.479 "name": "BaseBdev2", 00:18:55.479 "uuid": "f5fe320f-c85f-4a20-8817-ad842165a96c", 00:18:55.479 "is_configured": true, 00:18:55.479 "data_offset": 2048, 00:18:55.479 "data_size": 63488 00:18:55.479 } 00:18:55.479 ] 00:18:55.479 }' 00:18:55.479 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.479 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.739 17:06:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.739 [2024-11-08 17:06:32.364321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:55.739 [2024-11-08 17:06:32.364383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.739 17:06:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.739 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.999 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:55.999 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:55.999 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:55.999 17:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60926 00:18:56.000 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60926 ']' 00:18:56.000 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60926 00:18:56.000 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:56.000 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:56.000 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60926 00:18:56.000 killing process with pid 60926 00:18:56.000 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:56.000 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:56.000 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60926' 00:18:56.000 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60926 00:18:56.000 17:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60926 00:18:56.000 [2024-11-08 17:06:32.493500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.000 [2024-11-08 17:06:32.504687] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:56.578 ************************************ 00:18:56.578 END TEST raid_state_function_test_sb 00:18:56.578 ************************************ 00:18:56.578 17:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:56.578 00:18:56.578 real 0m3.902s 00:18:56.578 user 0m5.594s 00:18:56.578 sys 0m0.609s 00:18:56.578 17:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:56.578 17:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.838 17:06:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:18:56.838 17:06:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:56.838 17:06:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:56.838 17:06:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.838 ************************************ 00:18:56.838 START TEST raid_superblock_test 00:18:56.838 ************************************ 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:56.838 
17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:56.838 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61162 00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61162 00:18:56.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61162 ']' 00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:56.839 17:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.839 [2024-11-08 17:06:33.397636] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:18:56.839 [2024-11-08 17:06:33.397805] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61162 ] 00:18:57.100 [2024-11-08 17:06:33.561392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.100 [2024-11-08 17:06:33.681513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.359 [2024-11-08 17:06:33.829266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.359 [2024-11-08 17:06:33.829333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.618 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:57.619 17:06:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.619 malloc1 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.619 [2024-11-08 17:06:34.300310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:57.619 [2024-11-08 17:06:34.300385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.619 [2024-11-08 17:06:34.300408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:57.619 [2024-11-08 17:06:34.300419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.619 [2024-11-08 17:06:34.302886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.619 [2024-11-08 17:06:34.303035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:57.619 pt1 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:57.619 17:06:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.619 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.877 malloc2 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.877 [2024-11-08 17:06:34.343770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:57.877 [2024-11-08 17:06:34.343831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.877 [2024-11-08 17:06:34.343852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:57.877 
[2024-11-08 17:06:34.343862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.877 [2024-11-08 17:06:34.346207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.877 [2024-11-08 17:06:34.346365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:57.877 pt2 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.877 [2024-11-08 17:06:34.355849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:57.877 [2024-11-08 17:06:34.357958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:57.877 [2024-11-08 17:06:34.358127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:57.877 [2024-11-08 17:06:34.358139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:57.877 [2024-11-08 17:06:34.358437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:57.877 [2024-11-08 17:06:34.358594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:57.877 [2024-11-08 17:06:34.358606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:57.877 [2024-11-08 17:06:34.358777] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.877 "name": "raid_bdev1", 00:18:57.877 "uuid": 
"46fc986c-d437-4ed0-be3f-fac518c02bef", 00:18:57.877 "strip_size_kb": 64, 00:18:57.877 "state": "online", 00:18:57.877 "raid_level": "concat", 00:18:57.877 "superblock": true, 00:18:57.877 "num_base_bdevs": 2, 00:18:57.877 "num_base_bdevs_discovered": 2, 00:18:57.877 "num_base_bdevs_operational": 2, 00:18:57.877 "base_bdevs_list": [ 00:18:57.877 { 00:18:57.877 "name": "pt1", 00:18:57.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:57.877 "is_configured": true, 00:18:57.877 "data_offset": 2048, 00:18:57.877 "data_size": 63488 00:18:57.877 }, 00:18:57.877 { 00:18:57.877 "name": "pt2", 00:18:57.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.877 "is_configured": true, 00:18:57.877 "data_offset": 2048, 00:18:57.877 "data_size": 63488 00:18:57.877 } 00:18:57.877 ] 00:18:57.877 }' 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.877 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.138 
17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.138 [2024-11-08 17:06:34.692216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.138 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:58.138 "name": "raid_bdev1", 00:18:58.138 "aliases": [ 00:18:58.138 "46fc986c-d437-4ed0-be3f-fac518c02bef" 00:18:58.138 ], 00:18:58.138 "product_name": "Raid Volume", 00:18:58.138 "block_size": 512, 00:18:58.138 "num_blocks": 126976, 00:18:58.138 "uuid": "46fc986c-d437-4ed0-be3f-fac518c02bef", 00:18:58.138 "assigned_rate_limits": { 00:18:58.138 "rw_ios_per_sec": 0, 00:18:58.138 "rw_mbytes_per_sec": 0, 00:18:58.138 "r_mbytes_per_sec": 0, 00:18:58.138 "w_mbytes_per_sec": 0 00:18:58.138 }, 00:18:58.138 "claimed": false, 00:18:58.138 "zoned": false, 00:18:58.138 "supported_io_types": { 00:18:58.138 "read": true, 00:18:58.138 "write": true, 00:18:58.138 "unmap": true, 00:18:58.139 "flush": true, 00:18:58.139 "reset": true, 00:18:58.139 "nvme_admin": false, 00:18:58.139 "nvme_io": false, 00:18:58.139 "nvme_io_md": false, 00:18:58.139 "write_zeroes": true, 00:18:58.139 "zcopy": false, 00:18:58.139 "get_zone_info": false, 00:18:58.139 "zone_management": false, 00:18:58.139 "zone_append": false, 00:18:58.139 "compare": false, 00:18:58.139 "compare_and_write": false, 00:18:58.139 "abort": false, 00:18:58.139 "seek_hole": false, 00:18:58.139 "seek_data": false, 00:18:58.139 "copy": false, 00:18:58.139 "nvme_iov_md": false 00:18:58.139 }, 00:18:58.139 "memory_domains": [ 00:18:58.139 { 00:18:58.139 "dma_device_id": "system", 00:18:58.139 "dma_device_type": 1 00:18:58.139 }, 00:18:58.139 { 00:18:58.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.139 "dma_device_type": 2 00:18:58.139 }, 00:18:58.139 { 00:18:58.139 "dma_device_id": "system", 00:18:58.139 
"dma_device_type": 1 00:18:58.139 }, 00:18:58.139 { 00:18:58.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.139 "dma_device_type": 2 00:18:58.139 } 00:18:58.139 ], 00:18:58.139 "driver_specific": { 00:18:58.139 "raid": { 00:18:58.139 "uuid": "46fc986c-d437-4ed0-be3f-fac518c02bef", 00:18:58.139 "strip_size_kb": 64, 00:18:58.139 "state": "online", 00:18:58.139 "raid_level": "concat", 00:18:58.139 "superblock": true, 00:18:58.139 "num_base_bdevs": 2, 00:18:58.139 "num_base_bdevs_discovered": 2, 00:18:58.139 "num_base_bdevs_operational": 2, 00:18:58.139 "base_bdevs_list": [ 00:18:58.139 { 00:18:58.139 "name": "pt1", 00:18:58.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.139 "is_configured": true, 00:18:58.139 "data_offset": 2048, 00:18:58.139 "data_size": 63488 00:18:58.139 }, 00:18:58.139 { 00:18:58.139 "name": "pt2", 00:18:58.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.139 "is_configured": true, 00:18:58.139 "data_offset": 2048, 00:18:58.139 "data_size": 63488 00:18:58.139 } 00:18:58.139 ] 00:18:58.139 } 00:18:58.139 } 00:18:58.139 }' 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:58.139 pt2' 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b pt1 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.139 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.399 [2024-11-08 17:06:34.868254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=46fc986c-d437-4ed0-be3f-fac518c02bef 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 46fc986c-d437-4ed0-be3f-fac518c02bef ']' 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.399 [2024-11-08 17:06:34.891899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.399 [2024-11-08 17:06:34.891932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.399 [2024-11-08 17:06:34.892037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.399 [2024-11-08 17:06:34.892095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.399 [2024-11-08 17:06:34.892108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.399 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.400 
17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.400 17:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.400 [2024-11-08 17:06:34.995984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:58.400 [2024-11-08 17:06:34.998170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:58.400 [2024-11-08 17:06:34.998251] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:58.400 [2024-11-08 17:06:34.998312] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:58.400 [2024-11-08 17:06:34.998328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.400 [2024-11-08 17:06:34.998340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:58.400 request: 00:18:58.400 { 00:18:58.400 "name": "raid_bdev1", 00:18:58.400 "raid_level": "concat", 00:18:58.400 "base_bdevs": [ 00:18:58.400 "malloc1", 00:18:58.400 "malloc2" 00:18:58.400 ], 00:18:58.400 "strip_size_kb": 64, 00:18:58.400 "superblock": false, 00:18:58.400 "method": "bdev_raid_create", 00:18:58.400 "req_id": 1 00:18:58.400 } 00:18:58.400 Got JSON-RPC error response 00:18:58.400 response: 00:18:58.400 { 00:18:58.400 "code": -17, 00:18:58.400 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:58.400 } 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.400 [2024-11-08 17:06:35.039966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:58.400 [2024-11-08 17:06:35.040043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.400 [2024-11-08 17:06:35.040069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:58.400 [2024-11-08 17:06:35.040082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.400 [2024-11-08 17:06:35.042614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.400 [2024-11-08 17:06:35.042784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:58.400 [2024-11-08 17:06:35.042898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:58.400 [2024-11-08 17:06:35.042967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:58.400 pt1 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.400 "name": "raid_bdev1", 00:18:58.400 "uuid": "46fc986c-d437-4ed0-be3f-fac518c02bef", 00:18:58.400 "strip_size_kb": 64, 00:18:58.400 "state": "configuring", 00:18:58.400 "raid_level": "concat", 00:18:58.400 "superblock": true, 00:18:58.400 "num_base_bdevs": 2, 00:18:58.400 "num_base_bdevs_discovered": 1, 00:18:58.400 "num_base_bdevs_operational": 2, 00:18:58.400 "base_bdevs_list": [ 00:18:58.400 { 00:18:58.400 "name": "pt1", 00:18:58.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.400 "is_configured": true, 00:18:58.400 "data_offset": 2048, 00:18:58.400 "data_size": 63488 00:18:58.400 }, 00:18:58.400 { 00:18:58.400 "name": null, 00:18:58.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.400 "is_configured": false, 00:18:58.400 "data_offset": 2048, 00:18:58.400 "data_size": 63488 00:18:58.400 } 00:18:58.400 ] 00:18:58.400 }' 00:18:58.400 17:06:35 
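Note: the `verify_raid_bdev_state` helper traced above selects one bdev out of the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and then compares the expected fields. A minimal Python sketch of the same selection and checks, using only the JSON captured in this log (an illustration, not part of the test suite):

```python
import json

# Output of `rpc.py bdev_raid_get_bdevs all`, trimmed to the fields checked here.
bdevs = json.loads("""[
  {
    "name": "raid_bdev1",
    "uuid": "46fc986c-d437-4ed0-be3f-fac518c02bef",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "superblock": true,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 2
  }
]""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The comparisons verify_raid_bdev_state performs on the selected object.
assert info["state"] == "configuring"
assert info["raid_level"] == "concat"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 2
print(info["num_base_bdevs_discovered"])  # only pt1 is attached at this point
```

At this point only `pt1` has been created, so `num_base_bdevs_discovered` is 1 while the raid stays in `configuring`; it flips to `online` once `pt2` is added below.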
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.400 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.969 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:58.969 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:58.969 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:58.969 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:58.969 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.970 [2024-11-08 17:06:35.396082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:58.970 [2024-11-08 17:06:35.396173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.970 [2024-11-08 17:06:35.396197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:58.970 [2024-11-08 17:06:35.396209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.970 [2024-11-08 17:06:35.396749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.970 [2024-11-08 17:06:35.396787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:58.970 [2024-11-08 17:06:35.396886] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:58.970 [2024-11-08 17:06:35.396915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:58.970 [2024-11-08 17:06:35.397043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:58.970 [2024-11-08 17:06:35.397056] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:58.970 [2024-11-08 17:06:35.397314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:58.970 [2024-11-08 17:06:35.397462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:58.970 [2024-11-08 17:06:35.397471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:58.970 [2024-11-08 17:06:35.397612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.970 pt2 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.970 "name": "raid_bdev1", 00:18:58.970 "uuid": "46fc986c-d437-4ed0-be3f-fac518c02bef", 00:18:58.970 "strip_size_kb": 64, 00:18:58.970 "state": "online", 00:18:58.970 "raid_level": "concat", 00:18:58.970 "superblock": true, 00:18:58.970 "num_base_bdevs": 2, 00:18:58.970 "num_base_bdevs_discovered": 2, 00:18:58.970 "num_base_bdevs_operational": 2, 00:18:58.970 "base_bdevs_list": [ 00:18:58.970 { 00:18:58.970 "name": "pt1", 00:18:58.970 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.970 "is_configured": true, 00:18:58.970 "data_offset": 2048, 00:18:58.970 "data_size": 63488 00:18:58.970 }, 00:18:58.970 { 00:18:58.970 "name": "pt2", 00:18:58.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.970 "is_configured": true, 00:18:58.970 "data_offset": 2048, 00:18:58.970 "data_size": 63488 00:18:58.970 } 00:18:58.970 ] 00:18:58.970 }' 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.970 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:59.229 
17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:59.229 [2024-11-08 17:06:35.732422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.229 "name": "raid_bdev1", 00:18:59.229 "aliases": [ 00:18:59.229 "46fc986c-d437-4ed0-be3f-fac518c02bef" 00:18:59.229 ], 00:18:59.229 "product_name": "Raid Volume", 00:18:59.229 "block_size": 512, 00:18:59.229 "num_blocks": 126976, 00:18:59.229 "uuid": "46fc986c-d437-4ed0-be3f-fac518c02bef", 00:18:59.229 "assigned_rate_limits": { 00:18:59.229 "rw_ios_per_sec": 0, 00:18:59.229 "rw_mbytes_per_sec": 0, 00:18:59.229 "r_mbytes_per_sec": 0, 00:18:59.229 "w_mbytes_per_sec": 0 00:18:59.229 }, 00:18:59.229 "claimed": false, 00:18:59.229 "zoned": false, 00:18:59.229 "supported_io_types": { 00:18:59.229 "read": true, 00:18:59.229 "write": true, 00:18:59.229 "unmap": true, 00:18:59.229 "flush": true, 00:18:59.229 "reset": true, 00:18:59.229 "nvme_admin": false, 00:18:59.229 "nvme_io": false, 00:18:59.229 "nvme_io_md": false, 00:18:59.229 
"write_zeroes": true, 00:18:59.229 "zcopy": false, 00:18:59.229 "get_zone_info": false, 00:18:59.229 "zone_management": false, 00:18:59.229 "zone_append": false, 00:18:59.229 "compare": false, 00:18:59.229 "compare_and_write": false, 00:18:59.229 "abort": false, 00:18:59.229 "seek_hole": false, 00:18:59.229 "seek_data": false, 00:18:59.229 "copy": false, 00:18:59.229 "nvme_iov_md": false 00:18:59.229 }, 00:18:59.229 "memory_domains": [ 00:18:59.229 { 00:18:59.229 "dma_device_id": "system", 00:18:59.229 "dma_device_type": 1 00:18:59.229 }, 00:18:59.229 { 00:18:59.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.229 "dma_device_type": 2 00:18:59.229 }, 00:18:59.229 { 00:18:59.229 "dma_device_id": "system", 00:18:59.229 "dma_device_type": 1 00:18:59.229 }, 00:18:59.229 { 00:18:59.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.229 "dma_device_type": 2 00:18:59.229 } 00:18:59.229 ], 00:18:59.229 "driver_specific": { 00:18:59.229 "raid": { 00:18:59.229 "uuid": "46fc986c-d437-4ed0-be3f-fac518c02bef", 00:18:59.229 "strip_size_kb": 64, 00:18:59.229 "state": "online", 00:18:59.229 "raid_level": "concat", 00:18:59.229 "superblock": true, 00:18:59.229 "num_base_bdevs": 2, 00:18:59.229 "num_base_bdevs_discovered": 2, 00:18:59.229 "num_base_bdevs_operational": 2, 00:18:59.229 "base_bdevs_list": [ 00:18:59.229 { 00:18:59.229 "name": "pt1", 00:18:59.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.229 "is_configured": true, 00:18:59.229 "data_offset": 2048, 00:18:59.229 "data_size": 63488 00:18:59.229 }, 00:18:59.229 { 00:18:59.229 "name": "pt2", 00:18:59.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.229 "is_configured": true, 00:18:59.229 "data_offset": 2048, 00:18:59.229 "data_size": 63488 00:18:59.229 } 00:18:59.229 ] 00:18:59.229 } 00:18:59.229 } 00:18:59.229 }' 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
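Note: the jq filter just above, `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`, is what produces the `base_bdev_names='pt1 pt2'` assignment that follows. The same selection expressed in Python over the dump shown in this log (illustrative only, not SPDK code):

```python
import json

# driver_specific.raid portion of the `bdev_get_bdevs -b raid_bdev1` dump above, trimmed.
raid = json.loads("""{
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}""")

# jq: .base_bdevs_list[] | select(.is_configured == true).name
names = [b["name"] for b in raid["base_bdevs_list"] if b["is_configured"]]
print(" ".join(names))  # same value the test stores in base_bdev_names
```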
00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:59.229 pt2' 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.229 17:06:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.229 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.230 [2024-11-08 17:06:35.888463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 46fc986c-d437-4ed0-be3f-fac518c02bef '!=' 46fc986c-d437-4ed0-be3f-fac518c02bef ']' 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61162 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61162 ']' 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61162 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:59.230 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61162 00:18:59.491 killing process with pid 61162 
00:18:59.491 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:59.491 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:59.491 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61162' 00:18:59.491 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61162 00:18:59.491 [2024-11-08 17:06:35.944783] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.491 17:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61162 00:18:59.491 [2024-11-08 17:06:35.944908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.491 [2024-11-08 17:06:35.944976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.491 [2024-11-08 17:06:35.944990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:59.491 [2024-11-08 17:06:36.084708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:00.428 17:06:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:00.428 00:19:00.428 real 0m3.528s 00:19:00.428 user 0m4.857s 00:19:00.428 sys 0m0.594s 00:19:00.428 17:06:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:00.428 17:06:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.428 ************************************ 00:19:00.428 END TEST raid_superblock_test 00:19:00.429 ************************************ 00:19:00.429 17:06:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:19:00.429 17:06:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:00.429 17:06:36 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:19:00.429 17:06:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:00.429 ************************************ 00:19:00.429 START TEST raid_read_error_test 00:19:00.429 ************************************ 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:00.429 17:06:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OvaCXOCwTv 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61362 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61362 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61362 ']' 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:00.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:00.429 17:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.429 [2024-11-08 17:06:36.997021] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:19:00.429 [2024-11-08 17:06:36.997162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61362 ] 00:19:00.689 [2024-11-08 17:06:37.159099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.689 [2024-11-08 17:06:37.279247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.946 [2024-11-08 17:06:37.435325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.946 [2024-11-08 17:06:37.435381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.206 BaseBdev1_malloc 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
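Note: `waitforlisten 61362` above blocks until the freshly started bdevperf process exposes its JSON-RPC socket at `/var/tmp/spdk.sock`. A rough illustration of that polling pattern; the helper below is hypothetical and stands in for the real `waitforlisten` implementation, which also probes the RPC endpoint:

```python
import os
import tempfile
import time

def wait_for_socket(path, timeout=5.0, interval=0.05):
    """Poll until `path` exists, the way waitforlisten waits for /var/tmp/spdk.sock."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# Demonstrate with a throwaway file standing in for the RPC socket.
with tempfile.NamedTemporaryFile() as f:
    ready = wait_for_socket(f.name, timeout=1.0)
print(ready)
```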
00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.206 true 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.206 [2024-11-08 17:06:37.891397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:01.206 [2024-11-08 17:06:37.891459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.206 [2024-11-08 17:06:37.891479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:01.206 [2024-11-08 17:06:37.891490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.206 [2024-11-08 17:06:37.893802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.206 [2024-11-08 17:06:37.893838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:01.206 BaseBdev1 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.206 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:19:01.466 BaseBdev2_malloc 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.466 true 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.466 [2024-11-08 17:06:37.937747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:01.466 [2024-11-08 17:06:37.937823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.466 [2024-11-08 17:06:37.937841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:01.466 [2024-11-08 17:06:37.937852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.466 [2024-11-08 17:06:37.940139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.466 [2024-11-08 17:06:37.940177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:01.466 BaseBdev2 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:01.466 
17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.466 [2024-11-08 17:06:37.945848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.466 [2024-11-08 17:06:37.947853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.466 [2024-11-08 17:06:37.948057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:01.466 [2024-11-08 17:06:37.948071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:01.466 [2024-11-08 17:06:37.948334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:01.466 [2024-11-08 17:06:37.948504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:01.466 [2024-11-08 17:06:37.948514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:01.466 [2024-11-08 17:06:37.948669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.466 17:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.466 17:06:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.466 "name": "raid_bdev1", 00:19:01.466 "uuid": "12946962-fac1-4940-be57-7c332b4af727", 00:19:01.466 "strip_size_kb": 64, 00:19:01.466 "state": "online", 00:19:01.466 "raid_level": "concat", 00:19:01.466 "superblock": true, 00:19:01.466 "num_base_bdevs": 2, 00:19:01.466 "num_base_bdevs_discovered": 2, 00:19:01.466 "num_base_bdevs_operational": 2, 00:19:01.466 "base_bdevs_list": [ 00:19:01.466 { 00:19:01.466 "name": "BaseBdev1", 00:19:01.466 "uuid": "51bd882e-06db-50e8-9875-593a132a59d0", 00:19:01.466 "is_configured": true, 00:19:01.466 "data_offset": 2048, 00:19:01.466 "data_size": 63488 00:19:01.466 }, 00:19:01.466 { 00:19:01.466 "name": "BaseBdev2", 00:19:01.466 "uuid": "c1c598cb-d3eb-54ae-8ba0-a96c0df1cb57", 00:19:01.466 "is_configured": true, 00:19:01.466 "data_offset": 2048, 00:19:01.466 "data_size": 63488 00:19:01.466 } 00:19:01.466 ] 00:19:01.466 }' 00:19:01.466 17:06:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.466 17:06:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.727 17:06:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:01.727 17:06:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:01.727 [2024-11-08 17:06:38.387946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.668 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.669 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.669 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.669 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.669 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.669 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.669 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.669 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.669 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.669 "name": "raid_bdev1", 00:19:02.669 "uuid": "12946962-fac1-4940-be57-7c332b4af727", 00:19:02.669 "strip_size_kb": 64, 00:19:02.669 "state": "online", 00:19:02.669 "raid_level": "concat", 00:19:02.669 "superblock": true, 00:19:02.669 "num_base_bdevs": 2, 00:19:02.669 "num_base_bdevs_discovered": 2, 00:19:02.669 "num_base_bdevs_operational": 2, 00:19:02.669 "base_bdevs_list": [ 00:19:02.669 { 00:19:02.669 "name": "BaseBdev1", 00:19:02.669 "uuid": "51bd882e-06db-50e8-9875-593a132a59d0", 00:19:02.669 "is_configured": true, 00:19:02.669 "data_offset": 2048, 00:19:02.669 "data_size": 63488 00:19:02.669 }, 00:19:02.669 { 00:19:02.669 "name": "BaseBdev2", 00:19:02.669 "uuid": "c1c598cb-d3eb-54ae-8ba0-a96c0df1cb57", 00:19:02.669 "is_configured": true, 00:19:02.669 "data_offset": 2048, 00:19:02.669 "data_size": 63488 00:19:02.669 } 00:19:02.669 ] 00:19:02.669 }' 00:19:02.669 17:06:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.669 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.929 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:02.929 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.929 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.929 [2024-11-08 17:06:39.610104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.929 [2024-11-08 17:06:39.610142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.929 [2024-11-08 17:06:39.613296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.929 { 00:19:02.929 "results": [ 00:19:02.929 { 00:19:02.929 "job": "raid_bdev1", 00:19:02.929 "core_mask": "0x1", 00:19:02.929 "workload": "randrw", 00:19:02.929 "percentage": 50, 00:19:02.929 "status": "finished", 00:19:02.929 "queue_depth": 1, 00:19:02.929 "io_size": 131072, 00:19:02.929 "runtime": 1.220091, 00:19:02.929 "iops": 13553.087433642244, 00:19:02.929 "mibps": 1694.1359292052805, 00:19:02.929 "io_failed": 1, 00:19:02.929 "io_timeout": 0, 00:19:02.929 "avg_latency_us": 101.71760443946211, 00:19:02.929 "min_latency_us": 33.28, 00:19:02.929 "max_latency_us": 1751.8276923076924 00:19:02.929 } 00:19:02.929 ], 00:19:02.929 "core_count": 1 00:19:02.929 } 00:19:02.929 [2024-11-08 17:06:39.613462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.929 [2024-11-08 17:06:39.613510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.929 [2024-11-08 17:06:39.613524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:02.929 17:06:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.929 17:06:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61362 00:19:02.929 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61362 ']' 00:19:02.929 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61362 00:19:02.929 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:19:02.929 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:02.929 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61362 00:19:03.190 killing process with pid 61362 00:19:03.190 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:03.190 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:03.190 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61362' 00:19:03.190 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61362 00:19:03.190 17:06:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61362 00:19:03.190 [2024-11-08 17:06:39.644549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.190 [2024-11-08 17:06:39.736676] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.130 17:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OvaCXOCwTv 00:19:04.130 17:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:04.130 17:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:04.130 17:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.82 00:19:04.130 17:06:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:19:04.130 17:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:04.130 17:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:04.130 17:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.82 != \0\.\0\0 ]] 00:19:04.130 00:19:04.130 real 0m3.687s 00:19:04.130 user 0m4.353s 00:19:04.130 sys 0m0.418s 00:19:04.130 ************************************ 00:19:04.130 END TEST raid_read_error_test 00:19:04.130 ************************************ 00:19:04.130 17:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:04.130 17:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.130 17:06:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:19:04.130 17:06:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:04.130 17:06:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:04.130 17:06:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.130 ************************************ 00:19:04.130 START TEST raid_write_error_test 00:19:04.130 ************************************ 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kR77SQPioG 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61497 
00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61497 00:19:04.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61497 ']' 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.130 17:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:04.130 [2024-11-08 17:06:40.751159] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:19:04.130 [2024-11-08 17:06:40.751349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:19:04.390 [2024-11-08 17:06:40.919055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.390 [2024-11-08 17:06:41.040924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.649 [2024-11-08 17:06:41.190494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.649 [2024-11-08 17:06:41.190718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.910 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:04.910 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:04.910 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:04.910 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:04.910 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.910 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.202 BaseBdev1_malloc 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.202 true 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.202 [2024-11-08 17:06:41.653877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:05.202 [2024-11-08 17:06:41.653943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.202 [2024-11-08 17:06:41.653966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:05.202 [2024-11-08 17:06:41.653978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.202 [2024-11-08 17:06:41.656326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.202 [2024-11-08 17:06:41.656481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:05.202 BaseBdev1 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.202 BaseBdev2_malloc 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:05.202 17:06:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.202 true 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.202 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.203 [2024-11-08 17:06:41.708698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:05.203 [2024-11-08 17:06:41.708882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.203 [2024-11-08 17:06:41.708926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:05.203 [2024-11-08 17:06:41.708989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.203 [2024-11-08 17:06:41.711334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.203 [2024-11-08 17:06:41.711456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:05.203 BaseBdev2 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.203 [2024-11-08 17:06:41.716778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:19:05.203 [2024-11-08 17:06:41.718887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.203 [2024-11-08 17:06:41.719103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:05.203 [2024-11-08 17:06:41.719174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:05.203 [2024-11-08 17:06:41.719453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:05.203 [2024-11-08 17:06:41.719637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:05.203 [2024-11-08 17:06:41.719667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:05.203 [2024-11-08 17:06:41.719982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.203 17:06:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.203 "name": "raid_bdev1", 00:19:05.203 "uuid": "ae74de7f-1189-4dc7-95a8-903b71af8ebf", 00:19:05.203 "strip_size_kb": 64, 00:19:05.203 "state": "online", 00:19:05.203 "raid_level": "concat", 00:19:05.203 "superblock": true, 00:19:05.203 "num_base_bdevs": 2, 00:19:05.203 "num_base_bdevs_discovered": 2, 00:19:05.203 "num_base_bdevs_operational": 2, 00:19:05.203 "base_bdevs_list": [ 00:19:05.203 { 00:19:05.203 "name": "BaseBdev1", 00:19:05.203 "uuid": "6f481fe5-4bb9-5855-8dc7-f8fda6a75b55", 00:19:05.203 "is_configured": true, 00:19:05.203 "data_offset": 2048, 00:19:05.203 "data_size": 63488 00:19:05.203 }, 00:19:05.203 { 00:19:05.203 "name": "BaseBdev2", 00:19:05.203 "uuid": "32c81aa2-8dcc-5a21-9e14-75945f23acb5", 00:19:05.203 "is_configured": true, 00:19:05.203 "data_offset": 2048, 00:19:05.203 "data_size": 63488 00:19:05.203 } 00:19:05.203 ] 00:19:05.203 }' 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.203 17:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.465 17:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:19:05.465 17:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:05.465 [2024-11-08 17:06:42.158121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.408 "name": "raid_bdev1", 00:19:06.408 "uuid": "ae74de7f-1189-4dc7-95a8-903b71af8ebf", 00:19:06.408 "strip_size_kb": 64, 00:19:06.408 "state": "online", 00:19:06.408 "raid_level": "concat", 00:19:06.408 "superblock": true, 00:19:06.408 "num_base_bdevs": 2, 00:19:06.408 "num_base_bdevs_discovered": 2, 00:19:06.408 "num_base_bdevs_operational": 2, 00:19:06.408 "base_bdevs_list": [ 00:19:06.408 { 00:19:06.408 "name": "BaseBdev1", 00:19:06.408 "uuid": "6f481fe5-4bb9-5855-8dc7-f8fda6a75b55", 00:19:06.408 "is_configured": true, 00:19:06.408 "data_offset": 2048, 00:19:06.408 "data_size": 63488 00:19:06.408 }, 00:19:06.408 { 00:19:06.408 "name": "BaseBdev2", 00:19:06.408 "uuid": "32c81aa2-8dcc-5a21-9e14-75945f23acb5", 00:19:06.408 "is_configured": true, 00:19:06.408 "data_offset": 2048, 00:19:06.408 "data_size": 63488 00:19:06.408 } 00:19:06.408 ] 00:19:06.408 }' 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.408 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.981 [2024-11-08 17:06:43.418663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.981 [2024-11-08 17:06:43.418718] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.981 [2024-11-08 17:06:43.422061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.981 [2024-11-08 17:06:43.422135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.981 [2024-11-08 17:06:43.422179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.981 [2024-11-08 17:06:43.422199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:06.981 { 00:19:06.981 "results": [ 00:19:06.981 { 00:19:06.981 "job": "raid_bdev1", 00:19:06.981 "core_mask": "0x1", 00:19:06.981 "workload": "randrw", 00:19:06.981 "percentage": 50, 00:19:06.981 "status": "finished", 00:19:06.981 "queue_depth": 1, 00:19:06.981 "io_size": 131072, 00:19:06.981 "runtime": 1.258098, 00:19:06.981 "iops": 11770.148271438315, 00:19:06.981 "mibps": 1471.2685339297893, 00:19:06.981 "io_failed": 1, 00:19:06.981 "io_timeout": 0, 00:19:06.981 "avg_latency_us": 118.40230379654786, 00:19:06.981 "min_latency_us": 33.08307692307692, 00:19:06.981 "max_latency_us": 1789.636923076923 00:19:06.981 } 00:19:06.981 ], 00:19:06.981 "core_count": 1 00:19:06.981 } 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61497 00:19:06.981 17:06:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 61497 ']' 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61497 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61497 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:06.981 killing process with pid 61497 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61497' 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61497 00:19:06.981 17:06:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61497 00:19:06.981 [2024-11-08 17:06:43.450534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:06.981 [2024-11-08 17:06:43.561801] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.925 17:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kR77SQPioG 00:19:07.925 17:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:07.925 17:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:07.925 17:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:19:07.925 17:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:19:07.925 17:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:07.925 17:06:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:07.925 17:06:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:19:07.925 00:19:07.925 real 0m3.821s 00:19:07.925 user 0m4.506s 00:19:07.925 sys 0m0.447s 00:19:07.925 17:06:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:07.925 ************************************ 00:19:07.925 END TEST raid_write_error_test 00:19:07.925 ************************************ 00:19:07.925 17:06:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.925 17:06:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:07.925 17:06:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:19:07.925 17:06:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:07.925 17:06:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:07.925 17:06:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.925 ************************************ 00:19:07.925 START TEST raid_state_function_test 00:19:07.925 ************************************ 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:07.925 Process raid pid: 61635 00:19:07.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # 
raid_pid=61635 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61635' 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61635 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61635 ']' 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.925 17:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:08.186 [2024-11-08 17:06:44.648413] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:19:08.186 [2024-11-08 17:06:44.648609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.186 [2024-11-08 17:06:44.814779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.446 [2024-11-08 17:06:44.982298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.706 [2024-11-08 17:06:45.173693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.706 [2024-11-08 17:06:45.173803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.968 [2024-11-08 17:06:45.551328] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.968 [2024-11-08 17:06:45.551421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.968 [2024-11-08 17:06:45.551435] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.968 [2024-11-08 17:06:45.551448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.968 17:06:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.968 "name": "Existed_Raid", 00:19:08.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.968 "strip_size_kb": 0, 00:19:08.968 "state": "configuring", 00:19:08.968 
"raid_level": "raid1", 00:19:08.968 "superblock": false, 00:19:08.968 "num_base_bdevs": 2, 00:19:08.968 "num_base_bdevs_discovered": 0, 00:19:08.968 "num_base_bdevs_operational": 2, 00:19:08.968 "base_bdevs_list": [ 00:19:08.968 { 00:19:08.968 "name": "BaseBdev1", 00:19:08.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.968 "is_configured": false, 00:19:08.968 "data_offset": 0, 00:19:08.968 "data_size": 0 00:19:08.968 }, 00:19:08.968 { 00:19:08.968 "name": "BaseBdev2", 00:19:08.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.968 "is_configured": false, 00:19:08.968 "data_offset": 0, 00:19:08.968 "data_size": 0 00:19:08.968 } 00:19:08.968 ] 00:19:08.968 }' 00:19:08.968 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.969 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.228 [2024-11-08 17:06:45.895395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:09.228 [2024-11-08 17:06:45.895455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:09.228 [2024-11-08 17:06:45.903342] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:09.228 [2024-11-08 17:06:45.903412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:09.228 [2024-11-08 17:06:45.903425] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:09.228 [2024-11-08 17:06:45.903441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.228 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.487 [2024-11-08 17:06:45.945438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.487 BaseBdev1 00:19:09.487 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.487 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:09.487 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:09.487 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.488 [ 00:19:09.488 { 00:19:09.488 "name": "BaseBdev1", 00:19:09.488 "aliases": [ 00:19:09.488 "254aff52-8356-439e-a0c9-d9753f348938" 00:19:09.488 ], 00:19:09.488 "product_name": "Malloc disk", 00:19:09.488 "block_size": 512, 00:19:09.488 "num_blocks": 65536, 00:19:09.488 "uuid": "254aff52-8356-439e-a0c9-d9753f348938", 00:19:09.488 "assigned_rate_limits": { 00:19:09.488 "rw_ios_per_sec": 0, 00:19:09.488 "rw_mbytes_per_sec": 0, 00:19:09.488 "r_mbytes_per_sec": 0, 00:19:09.488 "w_mbytes_per_sec": 0 00:19:09.488 }, 00:19:09.488 "claimed": true, 00:19:09.488 "claim_type": "exclusive_write", 00:19:09.488 "zoned": false, 00:19:09.488 "supported_io_types": { 00:19:09.488 "read": true, 00:19:09.488 "write": true, 00:19:09.488 "unmap": true, 00:19:09.488 "flush": true, 00:19:09.488 "reset": true, 00:19:09.488 "nvme_admin": false, 00:19:09.488 "nvme_io": false, 00:19:09.488 "nvme_io_md": false, 00:19:09.488 "write_zeroes": true, 00:19:09.488 "zcopy": true, 00:19:09.488 "get_zone_info": false, 00:19:09.488 "zone_management": false, 00:19:09.488 "zone_append": false, 00:19:09.488 "compare": false, 00:19:09.488 "compare_and_write": false, 00:19:09.488 "abort": true, 00:19:09.488 "seek_hole": false, 00:19:09.488 "seek_data": false, 00:19:09.488 "copy": true, 00:19:09.488 "nvme_iov_md": 
false 00:19:09.488 }, 00:19:09.488 "memory_domains": [ 00:19:09.488 { 00:19:09.488 "dma_device_id": "system", 00:19:09.488 "dma_device_type": 1 00:19:09.488 }, 00:19:09.488 { 00:19:09.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.488 "dma_device_type": 2 00:19:09.488 } 00:19:09.488 ], 00:19:09.488 "driver_specific": {} 00:19:09.488 } 00:19:09.488 ] 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.488 
17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.488 17:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.488 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.488 "name": "Existed_Raid", 00:19:09.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.488 "strip_size_kb": 0, 00:19:09.488 "state": "configuring", 00:19:09.488 "raid_level": "raid1", 00:19:09.488 "superblock": false, 00:19:09.488 "num_base_bdevs": 2, 00:19:09.488 "num_base_bdevs_discovered": 1, 00:19:09.488 "num_base_bdevs_operational": 2, 00:19:09.488 "base_bdevs_list": [ 00:19:09.488 { 00:19:09.488 "name": "BaseBdev1", 00:19:09.488 "uuid": "254aff52-8356-439e-a0c9-d9753f348938", 00:19:09.488 "is_configured": true, 00:19:09.488 "data_offset": 0, 00:19:09.488 "data_size": 65536 00:19:09.488 }, 00:19:09.488 { 00:19:09.488 "name": "BaseBdev2", 00:19:09.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.488 "is_configured": false, 00:19:09.488 "data_offset": 0, 00:19:09.488 "data_size": 0 00:19:09.488 } 00:19:09.488 ] 00:19:09.488 }' 00:19:09.488 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.488 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.747 [2024-11-08 17:06:46.329603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:09.747 [2024-11-08 17:06:46.329921] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.747 [2024-11-08 17:06:46.341652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.747 [2024-11-08 17:06:46.344170] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:09.747 [2024-11-08 17:06:46.344282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.747 "name": "Existed_Raid", 00:19:09.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.747 "strip_size_kb": 0, 00:19:09.747 "state": "configuring", 00:19:09.747 "raid_level": "raid1", 00:19:09.747 "superblock": false, 00:19:09.747 "num_base_bdevs": 2, 00:19:09.747 "num_base_bdevs_discovered": 1, 00:19:09.747 "num_base_bdevs_operational": 2, 00:19:09.747 "base_bdevs_list": [ 00:19:09.747 { 00:19:09.747 "name": "BaseBdev1", 00:19:09.747 "uuid": "254aff52-8356-439e-a0c9-d9753f348938", 00:19:09.747 "is_configured": true, 00:19:09.747 "data_offset": 0, 00:19:09.747 "data_size": 65536 00:19:09.747 }, 00:19:09.747 { 00:19:09.747 "name": "BaseBdev2", 00:19:09.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.747 "is_configured": false, 00:19:09.747 "data_offset": 0, 00:19:09.747 "data_size": 0 00:19:09.747 } 00:19:09.747 ] 
00:19:09.747 }' 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.747 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.008 [2024-11-08 17:06:46.699242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:10.008 [2024-11-08 17:06:46.699655] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:10.008 [2024-11-08 17:06:46.699700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:10.008 [2024-11-08 17:06:46.700265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:10.008 [2024-11-08 17:06:46.700702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:10.008 [2024-11-08 17:06:46.700730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:10.008 [2024-11-08 17:06:46.701141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.008 BaseBdev2 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@903 -- # local i 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.008 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.270 [ 00:19:10.270 { 00:19:10.270 "name": "BaseBdev2", 00:19:10.270 "aliases": [ 00:19:10.270 "33f74804-fa90-43ed-9491-c4e11c6c2c6d" 00:19:10.270 ], 00:19:10.270 "product_name": "Malloc disk", 00:19:10.270 "block_size": 512, 00:19:10.270 "num_blocks": 65536, 00:19:10.270 "uuid": "33f74804-fa90-43ed-9491-c4e11c6c2c6d", 00:19:10.270 "assigned_rate_limits": { 00:19:10.270 "rw_ios_per_sec": 0, 00:19:10.270 "rw_mbytes_per_sec": 0, 00:19:10.270 "r_mbytes_per_sec": 0, 00:19:10.270 "w_mbytes_per_sec": 0 00:19:10.270 }, 00:19:10.270 "claimed": true, 00:19:10.270 "claim_type": "exclusive_write", 00:19:10.270 "zoned": false, 00:19:10.270 "supported_io_types": { 00:19:10.270 "read": true, 00:19:10.270 "write": true, 00:19:10.270 "unmap": true, 00:19:10.270 "flush": true, 00:19:10.270 "reset": true, 00:19:10.270 "nvme_admin": false, 00:19:10.270 "nvme_io": false, 00:19:10.270 "nvme_io_md": false, 00:19:10.270 "write_zeroes": 
true, 00:19:10.270 "zcopy": true, 00:19:10.270 "get_zone_info": false, 00:19:10.270 "zone_management": false, 00:19:10.270 "zone_append": false, 00:19:10.270 "compare": false, 00:19:10.270 "compare_and_write": false, 00:19:10.270 "abort": true, 00:19:10.270 "seek_hole": false, 00:19:10.270 "seek_data": false, 00:19:10.270 "copy": true, 00:19:10.270 "nvme_iov_md": false 00:19:10.270 }, 00:19:10.270 "memory_domains": [ 00:19:10.270 { 00:19:10.270 "dma_device_id": "system", 00:19:10.270 "dma_device_type": 1 00:19:10.270 }, 00:19:10.270 { 00:19:10.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.270 "dma_device_type": 2 00:19:10.270 } 00:19:10.270 ], 00:19:10.270 "driver_specific": {} 00:19:10.270 } 00:19:10.270 ] 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.270 17:06:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.270 "name": "Existed_Raid", 00:19:10.270 "uuid": "b051edd7-3bc7-4799-ae0d-e72c37e4e530", 00:19:10.270 "strip_size_kb": 0, 00:19:10.270 "state": "online", 00:19:10.270 "raid_level": "raid1", 00:19:10.270 "superblock": false, 00:19:10.270 "num_base_bdevs": 2, 00:19:10.270 "num_base_bdevs_discovered": 2, 00:19:10.270 "num_base_bdevs_operational": 2, 00:19:10.270 "base_bdevs_list": [ 00:19:10.270 { 00:19:10.270 "name": "BaseBdev1", 00:19:10.270 "uuid": "254aff52-8356-439e-a0c9-d9753f348938", 00:19:10.270 "is_configured": true, 00:19:10.270 "data_offset": 0, 00:19:10.270 "data_size": 65536 00:19:10.270 }, 00:19:10.270 { 00:19:10.270 "name": "BaseBdev2", 00:19:10.270 "uuid": "33f74804-fa90-43ed-9491-c4e11c6c2c6d", 00:19:10.270 "is_configured": true, 00:19:10.270 "data_offset": 0, 00:19:10.270 "data_size": 65536 00:19:10.270 } 00:19:10.270 ] 00:19:10.270 }' 00:19:10.270 17:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.270 17:06:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.530 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:10.531 [2024-11-08 17:06:47.071815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:10.531 "name": "Existed_Raid", 00:19:10.531 "aliases": [ 00:19:10.531 "b051edd7-3bc7-4799-ae0d-e72c37e4e530" 00:19:10.531 ], 00:19:10.531 "product_name": "Raid Volume", 00:19:10.531 "block_size": 512, 00:19:10.531 "num_blocks": 65536, 00:19:10.531 "uuid": "b051edd7-3bc7-4799-ae0d-e72c37e4e530", 00:19:10.531 "assigned_rate_limits": { 00:19:10.531 "rw_ios_per_sec": 0, 00:19:10.531 "rw_mbytes_per_sec": 0, 00:19:10.531 "r_mbytes_per_sec": 0, 00:19:10.531 
"w_mbytes_per_sec": 0 00:19:10.531 }, 00:19:10.531 "claimed": false, 00:19:10.531 "zoned": false, 00:19:10.531 "supported_io_types": { 00:19:10.531 "read": true, 00:19:10.531 "write": true, 00:19:10.531 "unmap": false, 00:19:10.531 "flush": false, 00:19:10.531 "reset": true, 00:19:10.531 "nvme_admin": false, 00:19:10.531 "nvme_io": false, 00:19:10.531 "nvme_io_md": false, 00:19:10.531 "write_zeroes": true, 00:19:10.531 "zcopy": false, 00:19:10.531 "get_zone_info": false, 00:19:10.531 "zone_management": false, 00:19:10.531 "zone_append": false, 00:19:10.531 "compare": false, 00:19:10.531 "compare_and_write": false, 00:19:10.531 "abort": false, 00:19:10.531 "seek_hole": false, 00:19:10.531 "seek_data": false, 00:19:10.531 "copy": false, 00:19:10.531 "nvme_iov_md": false 00:19:10.531 }, 00:19:10.531 "memory_domains": [ 00:19:10.531 { 00:19:10.531 "dma_device_id": "system", 00:19:10.531 "dma_device_type": 1 00:19:10.531 }, 00:19:10.531 { 00:19:10.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.531 "dma_device_type": 2 00:19:10.531 }, 00:19:10.531 { 00:19:10.531 "dma_device_id": "system", 00:19:10.531 "dma_device_type": 1 00:19:10.531 }, 00:19:10.531 { 00:19:10.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.531 "dma_device_type": 2 00:19:10.531 } 00:19:10.531 ], 00:19:10.531 "driver_specific": { 00:19:10.531 "raid": { 00:19:10.531 "uuid": "b051edd7-3bc7-4799-ae0d-e72c37e4e530", 00:19:10.531 "strip_size_kb": 0, 00:19:10.531 "state": "online", 00:19:10.531 "raid_level": "raid1", 00:19:10.531 "superblock": false, 00:19:10.531 "num_base_bdevs": 2, 00:19:10.531 "num_base_bdevs_discovered": 2, 00:19:10.531 "num_base_bdevs_operational": 2, 00:19:10.531 "base_bdevs_list": [ 00:19:10.531 { 00:19:10.531 "name": "BaseBdev1", 00:19:10.531 "uuid": "254aff52-8356-439e-a0c9-d9753f348938", 00:19:10.531 "is_configured": true, 00:19:10.531 "data_offset": 0, 00:19:10.531 "data_size": 65536 00:19:10.531 }, 00:19:10.531 { 00:19:10.531 "name": "BaseBdev2", 00:19:10.531 "uuid": 
"33f74804-fa90-43ed-9491-c4e11c6c2c6d", 00:19:10.531 "is_configured": true, 00:19:10.531 "data_offset": 0, 00:19:10.531 "data_size": 65536 00:19:10.531 } 00:19:10.531 ] 00:19:10.531 } 00:19:10.531 } 00:19:10.531 }' 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:10.531 BaseBdev2' 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.531 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.531 [2024-11-08 17:06:47.227532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.802 "name": "Existed_Raid", 00:19:10.802 "uuid": "b051edd7-3bc7-4799-ae0d-e72c37e4e530", 00:19:10.802 "strip_size_kb": 0, 00:19:10.802 "state": "online", 00:19:10.802 "raid_level": "raid1", 00:19:10.802 "superblock": false, 00:19:10.802 "num_base_bdevs": 2, 00:19:10.802 "num_base_bdevs_discovered": 1, 00:19:10.802 "num_base_bdevs_operational": 1, 00:19:10.802 "base_bdevs_list": [ 00:19:10.802 { 
00:19:10.802 "name": null, 00:19:10.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.802 "is_configured": false, 00:19:10.802 "data_offset": 0, 00:19:10.802 "data_size": 65536 00:19:10.802 }, 00:19:10.802 { 00:19:10.802 "name": "BaseBdev2", 00:19:10.802 "uuid": "33f74804-fa90-43ed-9491-c4e11c6c2c6d", 00:19:10.802 "is_configured": true, 00:19:10.802 "data_offset": 0, 00:19:10.802 "data_size": 65536 00:19:10.802 } 00:19:10.802 ] 00:19:10.802 }' 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.802 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:11.066 [2024-11-08 17:06:47.681093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:11.066 [2024-11-08 17:06:47.681249] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:11.066 [2024-11-08 17:06:47.754967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.066 [2024-11-08 17:06:47.755363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.066 [2024-11-08 17:06:47.755397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:11.066 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61635 00:19:11.328 17:06:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61635 ']' 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 61635 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61635 00:19:11.328 killing process with pid 61635 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61635' 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61635 00:19:11.328 17:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61635 00:19:11.328 [2024-11-08 17:06:47.821369] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:11.328 [2024-11-08 17:06:47.834197] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:12.269 17:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:12.269 00:19:12.269 real 0m4.164s 00:19:12.269 user 0m5.702s 00:19:12.269 sys 0m0.834s 00:19:12.269 ************************************ 00:19:12.269 END TEST raid_state_function_test 00:19:12.269 ************************************ 00:19:12.269 17:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:12.269 17:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.269 17:06:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:19:12.269 17:06:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:12.269 17:06:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:12.269 17:06:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.269 ************************************ 00:19:12.269 START TEST raid_state_function_test_sb 00:19:12.269 ************************************ 00:19:12.269 17:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:19:12.269 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:12.270 Process raid pid: 61877 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61877 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61877' 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61877 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61877 ']' 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:12.270 17:06:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:12.270 17:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.270 [2024-11-08 17:06:48.904722] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:19:12.270 [2024-11-08 17:06:48.905225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.531 [2024-11-08 17:06:49.070387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.531 [2024-11-08 17:06:49.236858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.791 [2024-11-08 17:06:49.421491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.791 [2024-11-08 17:06:49.421576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.360 [2024-11-08 17:06:49.814566] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:13.360 [2024-11-08 17:06:49.814665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:13.360 [2024-11-08 17:06:49.814679] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.360 [2024-11-08 17:06:49.814690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.360 "name": "Existed_Raid", 00:19:13.360 "uuid": "ab8b3856-170c-414e-8b46-6eda00c62fba", 00:19:13.360 "strip_size_kb": 0, 00:19:13.360 "state": "configuring", 00:19:13.360 "raid_level": "raid1", 00:19:13.360 "superblock": true, 00:19:13.360 "num_base_bdevs": 2, 00:19:13.360 "num_base_bdevs_discovered": 0, 00:19:13.360 "num_base_bdevs_operational": 2, 00:19:13.360 "base_bdevs_list": [ 00:19:13.360 { 00:19:13.360 "name": "BaseBdev1", 00:19:13.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.360 "is_configured": false, 00:19:13.360 "data_offset": 0, 00:19:13.360 "data_size": 0 00:19:13.360 }, 00:19:13.360 { 00:19:13.360 "name": "BaseBdev2", 00:19:13.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.360 "is_configured": false, 00:19:13.360 "data_offset": 0, 00:19:13.360 "data_size": 0 00:19:13.360 } 00:19:13.360 ] 00:19:13.360 }' 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.360 17:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.622 [2024-11-08 17:06:50.162576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:19:13.622 [2024-11-08 17:06:50.162636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.622 [2024-11-08 17:06:50.174609] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:13.622 [2024-11-08 17:06:50.174851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:13.622 [2024-11-08 17:06:50.174936] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.622 [2024-11-08 17:06:50.174973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.622 [2024-11-08 17:06:50.220867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.622 BaseBdev1 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.622 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.622 [ 00:19:13.622 { 00:19:13.622 "name": "BaseBdev1", 00:19:13.622 "aliases": [ 00:19:13.622 "b54ab295-1097-41f5-8d2a-292c219cc890" 00:19:13.622 ], 00:19:13.622 "product_name": "Malloc disk", 00:19:13.622 "block_size": 512, 00:19:13.622 "num_blocks": 65536, 00:19:13.622 "uuid": "b54ab295-1097-41f5-8d2a-292c219cc890", 00:19:13.622 "assigned_rate_limits": { 00:19:13.622 "rw_ios_per_sec": 0, 00:19:13.622 "rw_mbytes_per_sec": 0, 00:19:13.622 "r_mbytes_per_sec": 0, 00:19:13.622 "w_mbytes_per_sec": 0 00:19:13.622 }, 00:19:13.622 "claimed": true, 
00:19:13.622 "claim_type": "exclusive_write", 00:19:13.622 "zoned": false, 00:19:13.622 "supported_io_types": { 00:19:13.622 "read": true, 00:19:13.622 "write": true, 00:19:13.622 "unmap": true, 00:19:13.622 "flush": true, 00:19:13.622 "reset": true, 00:19:13.622 "nvme_admin": false, 00:19:13.622 "nvme_io": false, 00:19:13.622 "nvme_io_md": false, 00:19:13.622 "write_zeroes": true, 00:19:13.622 "zcopy": true, 00:19:13.622 "get_zone_info": false, 00:19:13.622 "zone_management": false, 00:19:13.622 "zone_append": false, 00:19:13.622 "compare": false, 00:19:13.622 "compare_and_write": false, 00:19:13.622 "abort": true, 00:19:13.622 "seek_hole": false, 00:19:13.622 "seek_data": false, 00:19:13.623 "copy": true, 00:19:13.623 "nvme_iov_md": false 00:19:13.623 }, 00:19:13.623 "memory_domains": [ 00:19:13.623 { 00:19:13.623 "dma_device_id": "system", 00:19:13.623 "dma_device_type": 1 00:19:13.623 }, 00:19:13.623 { 00:19:13.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.623 "dma_device_type": 2 00:19:13.623 } 00:19:13.623 ], 00:19:13.623 "driver_specific": {} 00:19:13.623 } 00:19:13.623 ] 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.623 "name": "Existed_Raid", 00:19:13.623 "uuid": "75181ea0-56a3-491c-baf0-79dd097dc08c", 00:19:13.623 "strip_size_kb": 0, 00:19:13.623 "state": "configuring", 00:19:13.623 "raid_level": "raid1", 00:19:13.623 "superblock": true, 00:19:13.623 "num_base_bdevs": 2, 00:19:13.623 "num_base_bdevs_discovered": 1, 00:19:13.623 "num_base_bdevs_operational": 2, 00:19:13.623 "base_bdevs_list": [ 00:19:13.623 { 00:19:13.623 "name": "BaseBdev1", 00:19:13.623 "uuid": "b54ab295-1097-41f5-8d2a-292c219cc890", 00:19:13.623 "is_configured": true, 00:19:13.623 "data_offset": 2048, 00:19:13.623 "data_size": 63488 00:19:13.623 }, 00:19:13.623 { 00:19:13.623 "name": "BaseBdev2", 00:19:13.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.623 "is_configured": false, 00:19:13.623 
"data_offset": 0, 00:19:13.623 "data_size": 0 00:19:13.623 } 00:19:13.623 ] 00:19:13.623 }' 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.623 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.883 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:13.883 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.884 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.884 [2024-11-08 17:06:50.585007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:13.884 [2024-11-08 17:06:50.585095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:13.884 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.884 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:13.884 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.884 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.884 [2024-11-08 17:06:50.593109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.884 [2024-11-08 17:06:50.595616] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.884 [2024-11-08 17:06:50.595693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.144 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.145 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.145 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.145 17:06:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.145 "name": "Existed_Raid", 00:19:14.145 "uuid": "6c3e238d-f7aa-4f33-9f01-2afe396d8bbe", 00:19:14.145 "strip_size_kb": 0, 00:19:14.145 "state": "configuring", 00:19:14.145 "raid_level": "raid1", 00:19:14.145 "superblock": true, 00:19:14.145 "num_base_bdevs": 2, 00:19:14.145 "num_base_bdevs_discovered": 1, 00:19:14.145 "num_base_bdevs_operational": 2, 00:19:14.145 "base_bdevs_list": [ 00:19:14.145 { 00:19:14.145 "name": "BaseBdev1", 00:19:14.145 "uuid": "b54ab295-1097-41f5-8d2a-292c219cc890", 00:19:14.145 "is_configured": true, 00:19:14.145 "data_offset": 2048, 00:19:14.145 "data_size": 63488 00:19:14.145 }, 00:19:14.145 { 00:19:14.145 "name": "BaseBdev2", 00:19:14.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.145 "is_configured": false, 00:19:14.145 "data_offset": 0, 00:19:14.145 "data_size": 0 00:19:14.145 } 00:19:14.145 ] 00:19:14.145 }' 00:19:14.145 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.145 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.407 [2024-11-08 17:06:50.948720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.407 [2024-11-08 17:06:50.949128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:14.407 [2024-11-08 17:06:50.949145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:14.407 [2024-11-08 17:06:50.949481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:14.407 
[2024-11-08 17:06:50.949664] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:14.407 [2024-11-08 17:06:50.949677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:14.407 [2024-11-08 17:06:50.949888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.407 BaseBdev2 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.407 [ 00:19:14.407 { 00:19:14.407 "name": "BaseBdev2", 00:19:14.407 "aliases": [ 00:19:14.407 "572771a5-c82d-4cf5-814b-76a8a42cf80f" 00:19:14.407 ], 00:19:14.407 "product_name": "Malloc disk", 00:19:14.407 "block_size": 512, 00:19:14.407 "num_blocks": 65536, 00:19:14.407 "uuid": "572771a5-c82d-4cf5-814b-76a8a42cf80f", 00:19:14.407 "assigned_rate_limits": { 00:19:14.407 "rw_ios_per_sec": 0, 00:19:14.407 "rw_mbytes_per_sec": 0, 00:19:14.407 "r_mbytes_per_sec": 0, 00:19:14.407 "w_mbytes_per_sec": 0 00:19:14.407 }, 00:19:14.407 "claimed": true, 00:19:14.407 "claim_type": "exclusive_write", 00:19:14.407 "zoned": false, 00:19:14.407 "supported_io_types": { 00:19:14.407 "read": true, 00:19:14.407 "write": true, 00:19:14.407 "unmap": true, 00:19:14.407 "flush": true, 00:19:14.407 "reset": true, 00:19:14.407 "nvme_admin": false, 00:19:14.407 "nvme_io": false, 00:19:14.407 "nvme_io_md": false, 00:19:14.407 "write_zeroes": true, 00:19:14.407 "zcopy": true, 00:19:14.407 "get_zone_info": false, 00:19:14.407 "zone_management": false, 00:19:14.407 "zone_append": false, 00:19:14.407 "compare": false, 00:19:14.407 "compare_and_write": false, 00:19:14.407 "abort": true, 00:19:14.407 "seek_hole": false, 00:19:14.407 "seek_data": false, 00:19:14.407 "copy": true, 00:19:14.407 "nvme_iov_md": false 00:19:14.407 }, 00:19:14.407 "memory_domains": [ 00:19:14.407 { 00:19:14.407 "dma_device_id": "system", 00:19:14.407 "dma_device_type": 1 00:19:14.407 }, 00:19:14.407 { 00:19:14.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.407 "dma_device_type": 2 00:19:14.407 } 00:19:14.407 ], 00:19:14.407 "driver_specific": {} 00:19:14.407 } 00:19:14.407 ] 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.407 17:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.407 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:19:14.407 "name": "Existed_Raid", 00:19:14.407 "uuid": "6c3e238d-f7aa-4f33-9f01-2afe396d8bbe", 00:19:14.407 "strip_size_kb": 0, 00:19:14.407 "state": "online", 00:19:14.407 "raid_level": "raid1", 00:19:14.407 "superblock": true, 00:19:14.407 "num_base_bdevs": 2, 00:19:14.407 "num_base_bdevs_discovered": 2, 00:19:14.407 "num_base_bdevs_operational": 2, 00:19:14.407 "base_bdevs_list": [ 00:19:14.407 { 00:19:14.407 "name": "BaseBdev1", 00:19:14.407 "uuid": "b54ab295-1097-41f5-8d2a-292c219cc890", 00:19:14.407 "is_configured": true, 00:19:14.407 "data_offset": 2048, 00:19:14.407 "data_size": 63488 00:19:14.407 }, 00:19:14.407 { 00:19:14.407 "name": "BaseBdev2", 00:19:14.407 "uuid": "572771a5-c82d-4cf5-814b-76a8a42cf80f", 00:19:14.407 "is_configured": true, 00:19:14.407 "data_offset": 2048, 00:19:14.407 "data_size": 63488 00:19:14.407 } 00:19:14.407 ] 00:19:14.407 }' 00:19:14.407 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.407 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:14.668 17:06:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:14.668 [2024-11-08 17:06:51.309240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:14.668 "name": "Existed_Raid", 00:19:14.668 "aliases": [ 00:19:14.668 "6c3e238d-f7aa-4f33-9f01-2afe396d8bbe" 00:19:14.668 ], 00:19:14.668 "product_name": "Raid Volume", 00:19:14.668 "block_size": 512, 00:19:14.668 "num_blocks": 63488, 00:19:14.668 "uuid": "6c3e238d-f7aa-4f33-9f01-2afe396d8bbe", 00:19:14.668 "assigned_rate_limits": { 00:19:14.668 "rw_ios_per_sec": 0, 00:19:14.668 "rw_mbytes_per_sec": 0, 00:19:14.668 "r_mbytes_per_sec": 0, 00:19:14.668 "w_mbytes_per_sec": 0 00:19:14.668 }, 00:19:14.668 "claimed": false, 00:19:14.668 "zoned": false, 00:19:14.668 "supported_io_types": { 00:19:14.668 "read": true, 00:19:14.668 "write": true, 00:19:14.668 "unmap": false, 00:19:14.668 "flush": false, 00:19:14.668 "reset": true, 00:19:14.668 "nvme_admin": false, 00:19:14.668 "nvme_io": false, 00:19:14.668 "nvme_io_md": false, 00:19:14.668 "write_zeroes": true, 00:19:14.668 "zcopy": false, 00:19:14.668 "get_zone_info": false, 00:19:14.668 "zone_management": false, 00:19:14.668 "zone_append": false, 00:19:14.668 "compare": false, 00:19:14.668 "compare_and_write": false, 00:19:14.668 "abort": false, 00:19:14.668 "seek_hole": false, 00:19:14.668 "seek_data": false, 00:19:14.668 "copy": false, 00:19:14.668 "nvme_iov_md": false 00:19:14.668 }, 00:19:14.668 "memory_domains": [ 00:19:14.668 { 00:19:14.668 "dma_device_id": "system", 00:19:14.668 
"dma_device_type": 1 00:19:14.668 }, 00:19:14.668 { 00:19:14.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.668 "dma_device_type": 2 00:19:14.668 }, 00:19:14.668 { 00:19:14.668 "dma_device_id": "system", 00:19:14.668 "dma_device_type": 1 00:19:14.668 }, 00:19:14.668 { 00:19:14.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.668 "dma_device_type": 2 00:19:14.668 } 00:19:14.668 ], 00:19:14.668 "driver_specific": { 00:19:14.668 "raid": { 00:19:14.668 "uuid": "6c3e238d-f7aa-4f33-9f01-2afe396d8bbe", 00:19:14.668 "strip_size_kb": 0, 00:19:14.668 "state": "online", 00:19:14.668 "raid_level": "raid1", 00:19:14.668 "superblock": true, 00:19:14.668 "num_base_bdevs": 2, 00:19:14.668 "num_base_bdevs_discovered": 2, 00:19:14.668 "num_base_bdevs_operational": 2, 00:19:14.668 "base_bdevs_list": [ 00:19:14.668 { 00:19:14.668 "name": "BaseBdev1", 00:19:14.668 "uuid": "b54ab295-1097-41f5-8d2a-292c219cc890", 00:19:14.668 "is_configured": true, 00:19:14.668 "data_offset": 2048, 00:19:14.668 "data_size": 63488 00:19:14.668 }, 00:19:14.668 { 00:19:14.668 "name": "BaseBdev2", 00:19:14.668 "uuid": "572771a5-c82d-4cf5-814b-76a8a42cf80f", 00:19:14.668 "is_configured": true, 00:19:14.668 "data_offset": 2048, 00:19:14.668 "data_size": 63488 00:19:14.668 } 00:19:14.668 ] 00:19:14.668 } 00:19:14.668 } 00:19:14.668 }' 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:14.668 BaseBdev2' 00:19:14.668 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:14.930 17:06:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.930 [2024-11-08 17:06:51.485025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.930 "name": "Existed_Raid", 00:19:14.930 "uuid": "6c3e238d-f7aa-4f33-9f01-2afe396d8bbe", 00:19:14.930 "strip_size_kb": 0, 00:19:14.930 "state": "online", 00:19:14.930 "raid_level": "raid1", 00:19:14.930 "superblock": true, 00:19:14.930 "num_base_bdevs": 2, 00:19:14.930 "num_base_bdevs_discovered": 1, 00:19:14.930 "num_base_bdevs_operational": 1, 00:19:14.930 "base_bdevs_list": [ 00:19:14.930 { 00:19:14.930 "name": null, 00:19:14.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.930 "is_configured": false, 00:19:14.930 "data_offset": 0, 00:19:14.930 "data_size": 63488 00:19:14.930 }, 00:19:14.930 { 00:19:14.930 "name": "BaseBdev2", 00:19:14.930 "uuid": "572771a5-c82d-4cf5-814b-76a8a42cf80f", 00:19:14.930 "is_configured": true, 00:19:14.930 "data_offset": 2048, 00:19:14.930 "data_size": 63488 00:19:14.930 } 00:19:14.930 ] 00:19:14.930 }' 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.930 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.191 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:19:15.191 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:15.191 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.191 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:15.191 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.191 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.452 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.452 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:15.452 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:15.452 17:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:15.452 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.452 17:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.452 [2024-11-08 17:06:51.923414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:15.452 [2024-11-08 17:06:51.923596] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.452 [2024-11-08 17:06:51.998837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.452 [2024-11-08 17:06:51.998954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.452 [2024-11-08 17:06:51.998969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:15.452 17:06:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61877 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61877 ']' 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61877 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61877 00:19:15.452 killing process with pid 61877 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61877' 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61877 00:19:15.452 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61877 00:19:15.452 [2024-11-08 17:06:52.075450] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.452 [2024-11-08 17:06:52.088364] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:16.393 ************************************ 00:19:16.393 END TEST raid_state_function_test_sb 00:19:16.393 ************************************ 00:19:16.394 17:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:16.394 00:19:16.394 real 0m4.166s 00:19:16.394 user 0m5.725s 00:19:16.394 sys 0m0.807s 00:19:16.394 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:16.394 17:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.394 17:06:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:19:16.394 17:06:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:16.394 17:06:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:16.394 17:06:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.394 ************************************ 00:19:16.394 START TEST raid_superblock_test 00:19:16.394 ************************************ 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62118 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62118 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62118 ']' 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:16.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:16.394 17:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.654 [2024-11-08 17:06:53.156809] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:19:16.654 [2024-11-08 17:06:53.157078] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62118 ] 00:19:16.654 [2024-11-08 17:06:53.338953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.915 [2024-11-08 17:06:53.507682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.211 [2024-11-08 17:06:53.694915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.211 [2024-11-08 17:06:53.694978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:17.472 17:06:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.472 malloc1 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.472 [2024-11-08 17:06:54.105693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:17.472 [2024-11-08 17:06:54.105852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.472 [2024-11-08 17:06:54.105883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:17.472 [2024-11-08 17:06:54.105896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.472 
[2024-11-08 17:06:54.108937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.472 [2024-11-08 17:06:54.109004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:17.472 pt1 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.472 malloc2 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:17.472 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.472 17:06:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.472 [2024-11-08 17:06:54.159837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:17.472 [2024-11-08 17:06:54.159948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.472 [2024-11-08 17:06:54.159977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:17.472 [2024-11-08 17:06:54.159990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.472 [2024-11-08 17:06:54.163211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.472 [2024-11-08 17:06:54.163275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:17.473 pt2 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.473 [2024-11-08 17:06:54.172122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:17.473 [2024-11-08 17:06:54.174688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:17.473 [2024-11-08 17:06:54.174972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:17.473 [2024-11-08 17:06:54.175007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:17.473 [2024-11-08 
17:06:54.175406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:17.473 [2024-11-08 17:06:54.175633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:17.473 [2024-11-08 17:06:54.175661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:17.473 [2024-11-08 17:06:54.175975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.473 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.734 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.734 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.734 "name": "raid_bdev1", 00:19:17.734 "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846", 00:19:17.734 "strip_size_kb": 0, 00:19:17.734 "state": "online", 00:19:17.734 "raid_level": "raid1", 00:19:17.734 "superblock": true, 00:19:17.734 "num_base_bdevs": 2, 00:19:17.734 "num_base_bdevs_discovered": 2, 00:19:17.734 "num_base_bdevs_operational": 2, 00:19:17.734 "base_bdevs_list": [ 00:19:17.734 { 00:19:17.734 "name": "pt1", 00:19:17.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:17.734 "is_configured": true, 00:19:17.734 "data_offset": 2048, 00:19:17.734 "data_size": 63488 00:19:17.734 }, 00:19:17.734 { 00:19:17.734 "name": "pt2", 00:19:17.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:17.734 "is_configured": true, 00:19:17.734 "data_offset": 2048, 00:19:17.734 "data_size": 63488 00:19:17.734 } 00:19:17.734 ] 00:19:17.734 }' 00:19:17.734 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.734 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.996 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:17.996 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:17.996 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:17.996 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:17.996 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:17.996 17:06:54 
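As an aside, the state check that `verify_raid_bdev_state` performs on the `jq`-selected output above can be mirrored in a self-contained Python sketch. The JSON literal below is copied from the `raid_bdev_info` captured in this log; no SPDK target is needed to follow the logic:

```python
import json

# raid_bdev_info as reported by `rpc_cmd bdev_raid_get_bdevs all` in the log,
# after the `jq -r '.[] | select(.name == "raid_bdev1")'` selection.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

# The fields verify_raid_bdev_state compares against its arguments
# (`raid_bdev1 online raid1 0 2` in this invocation):
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid1"
assert raid_bdev_info["strip_size_kb"] == 0
assert raid_bdev_info["num_base_bdevs_operational"] == 2
assert raid_bdev_info["num_base_bdevs_discovered"] == 2
print("state checks passed")
```

With both passthru base bdevs configured, discovered and operational counts match the expected 2, so the shell-side comparisons in `verify_raid_bdev_state` succeed.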
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:17.996 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.997 [2024-11-08 17:06:54.524506] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:17.997 "name": "raid_bdev1", 00:19:17.997 "aliases": [ 00:19:17.997 "d5b00ee9-c176-4607-b0a9-4d1d00dcb846" 00:19:17.997 ], 00:19:17.997 "product_name": "Raid Volume", 00:19:17.997 "block_size": 512, 00:19:17.997 "num_blocks": 63488, 00:19:17.997 "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846", 00:19:17.997 "assigned_rate_limits": { 00:19:17.997 "rw_ios_per_sec": 0, 00:19:17.997 "rw_mbytes_per_sec": 0, 00:19:17.997 "r_mbytes_per_sec": 0, 00:19:17.997 "w_mbytes_per_sec": 0 00:19:17.997 }, 00:19:17.997 "claimed": false, 00:19:17.997 "zoned": false, 00:19:17.997 "supported_io_types": { 00:19:17.997 "read": true, 00:19:17.997 "write": true, 00:19:17.997 "unmap": false, 00:19:17.997 "flush": false, 00:19:17.997 "reset": true, 00:19:17.997 "nvme_admin": false, 00:19:17.997 "nvme_io": false, 00:19:17.997 "nvme_io_md": false, 00:19:17.997 "write_zeroes": true, 00:19:17.997 "zcopy": false, 00:19:17.997 "get_zone_info": false, 00:19:17.997 "zone_management": false, 00:19:17.997 "zone_append": false, 00:19:17.997 "compare": false, 00:19:17.997 "compare_and_write": false, 00:19:17.997 "abort": false, 00:19:17.997 "seek_hole": false, 00:19:17.997 
"seek_data": false, 00:19:17.997 "copy": false, 00:19:17.997 "nvme_iov_md": false 00:19:17.997 }, 00:19:17.997 "memory_domains": [ 00:19:17.997 { 00:19:17.997 "dma_device_id": "system", 00:19:17.997 "dma_device_type": 1 00:19:17.997 }, 00:19:17.997 { 00:19:17.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.997 "dma_device_type": 2 00:19:17.997 }, 00:19:17.997 { 00:19:17.997 "dma_device_id": "system", 00:19:17.997 "dma_device_type": 1 00:19:17.997 }, 00:19:17.997 { 00:19:17.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.997 "dma_device_type": 2 00:19:17.997 } 00:19:17.997 ], 00:19:17.997 "driver_specific": { 00:19:17.997 "raid": { 00:19:17.997 "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846", 00:19:17.997 "strip_size_kb": 0, 00:19:17.997 "state": "online", 00:19:17.997 "raid_level": "raid1", 00:19:17.997 "superblock": true, 00:19:17.997 "num_base_bdevs": 2, 00:19:17.997 "num_base_bdevs_discovered": 2, 00:19:17.997 "num_base_bdevs_operational": 2, 00:19:17.997 "base_bdevs_list": [ 00:19:17.997 { 00:19:17.997 "name": "pt1", 00:19:17.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:17.997 "is_configured": true, 00:19:17.997 "data_offset": 2048, 00:19:17.997 "data_size": 63488 00:19:17.997 }, 00:19:17.997 { 00:19:17.997 "name": "pt2", 00:19:17.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:17.997 "is_configured": true, 00:19:17.997 "data_offset": 2048, 00:19:17.997 "data_size": 63488 00:19:17.997 } 00:19:17.997 ] 00:19:17.997 } 00:19:17.997 } 00:19:17.997 }' 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:17.997 pt2' 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.997 17:06:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:17.997 [2024-11-08 17:06:54.684556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.997 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d5b00ee9-c176-4607-b0a9-4d1d00dcb846 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d5b00ee9-c176-4607-b0a9-4d1d00dcb846 ']' 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.258 [2024-11-08 17:06:54.716119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.258 [2024-11-08 17:06:54.716169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.258 [2024-11-08 17:06:54.716312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.258 [2024-11-08 17:06:54.716402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.258 [2024-11-08 17:06:54.716418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:18.258 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.259 [2024-11-08 17:06:54.828225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:18.259 [2024-11-08 17:06:54.830917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:18.259 [2024-11-08 17:06:54.831035] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:19:18.259 [2024-11-08 17:06:54.831121] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:18.259 [2024-11-08 17:06:54.831139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.259 [2024-11-08 17:06:54.831154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:18.259 request: 00:19:18.259 { 00:19:18.259 "name": "raid_bdev1", 00:19:18.259 "raid_level": "raid1", 00:19:18.259 "base_bdevs": [ 00:19:18.259 "malloc1", 00:19:18.259 "malloc2" 00:19:18.259 ], 00:19:18.259 "superblock": false, 00:19:18.259 "method": "bdev_raid_create", 00:19:18.259 "req_id": 1 00:19:18.259 } 00:19:18.259 Got JSON-RPC error response 00:19:18.259 response: 00:19:18.259 { 00:19:18.259 "code": -17, 00:19:18.259 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:18.259 } 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- 
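The `NOT rpc_cmd bdev_raid_create ...` step above is an expected-failure case: malloc1 and malloc2 still carry superblocks claiming them for the deleted raid bdev, so the create is rejected. A minimal Python sketch of the check the harness effectively performs on the JSON-RPC error response (the literal is copied from the log; -17 matches errno EEXIST, though the shell side only requires a nonzero exit status, `es=1`):

```python
import json

# JSON-RPC error response copied from the failed bdev_raid_create call above.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# The create must fail because the base bdevs' superblocks reference a
# different (already deleted) raid bdev; -17 is -EEXIST.
assert response["code"] == -17
assert "File exists" in response["message"]
print("duplicate create rejected as expected")
```

This is why the subsequent `[[ 1 == 0 ]]` guard evaluates false and the harness sets `es=1` rather than treating the RPC as successful.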
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.259 [2024-11-08 17:06:54.892203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:18.259 [2024-11-08 17:06:54.892321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.259 [2024-11-08 17:06:54.892347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:18.259 [2024-11-08 17:06:54.892361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.259 [2024-11-08 17:06:54.895634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.259 [2024-11-08 17:06:54.895712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:18.259 [2024-11-08 17:06:54.895867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:18.259 [2024-11-08 17:06:54.895957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:18.259 pt1 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.259 17:06:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.259 "name": "raid_bdev1", 00:19:18.259 "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846", 00:19:18.259 "strip_size_kb": 0, 00:19:18.259 "state": "configuring", 00:19:18.259 "raid_level": "raid1", 00:19:18.259 "superblock": true, 00:19:18.259 "num_base_bdevs": 2, 00:19:18.259 "num_base_bdevs_discovered": 1, 00:19:18.259 "num_base_bdevs_operational": 2, 00:19:18.259 "base_bdevs_list": [ 00:19:18.259 { 00:19:18.259 "name": "pt1", 00:19:18.259 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:18.259 
"is_configured": true, 00:19:18.259 "data_offset": 2048, 00:19:18.259 "data_size": 63488 00:19:18.259 }, 00:19:18.259 { 00:19:18.259 "name": null, 00:19:18.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:18.259 "is_configured": false, 00:19:18.259 "data_offset": 2048, 00:19:18.259 "data_size": 63488 00:19:18.259 } 00:19:18.259 ] 00:19:18.259 }' 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.259 17:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.875 [2024-11-08 17:06:55.280380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:18.875 [2024-11-08 17:06:55.280511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.875 [2024-11-08 17:06:55.280539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:18.875 [2024-11-08 17:06:55.280554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.875 [2024-11-08 17:06:55.281245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.875 [2024-11-08 17:06:55.281271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:18.875 [2024-11-08 17:06:55.281391] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:18.875 [2024-11-08 17:06:55.281424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:18.875 [2024-11-08 17:06:55.281575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:18.875 [2024-11-08 17:06:55.281589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:18.875 [2024-11-08 17:06:55.281931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:18.875 [2024-11-08 17:06:55.282121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:18.875 [2024-11-08 17:06:55.282138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:18.875 [2024-11-08 17:06:55.282313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.875 pt2 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.875 
17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.875 "name": "raid_bdev1", 00:19:18.875 "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846", 00:19:18.875 "strip_size_kb": 0, 00:19:18.875 "state": "online", 00:19:18.875 "raid_level": "raid1", 00:19:18.875 "superblock": true, 00:19:18.875 "num_base_bdevs": 2, 00:19:18.875 "num_base_bdevs_discovered": 2, 00:19:18.875 "num_base_bdevs_operational": 2, 00:19:18.875 "base_bdevs_list": [ 00:19:18.875 { 00:19:18.875 "name": "pt1", 00:19:18.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:18.875 "is_configured": true, 00:19:18.875 "data_offset": 2048, 00:19:18.875 "data_size": 63488 00:19:18.875 }, 00:19:18.875 { 00:19:18.875 "name": "pt2", 00:19:18.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:18.875 "is_configured": true, 00:19:18.875 "data_offset": 2048, 00:19:18.875 "data_size": 63488 00:19:18.875 } 00:19:18.875 ] 00:19:18.875 }' 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:18.875 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.137 [2024-11-08 17:06:55.628748] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:19.137 "name": "raid_bdev1", 00:19:19.137 "aliases": [ 00:19:19.137 "d5b00ee9-c176-4607-b0a9-4d1d00dcb846" 00:19:19.137 ], 00:19:19.137 "product_name": "Raid Volume", 00:19:19.137 "block_size": 512, 00:19:19.137 "num_blocks": 63488, 00:19:19.137 "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846", 00:19:19.137 "assigned_rate_limits": { 00:19:19.137 "rw_ios_per_sec": 0, 00:19:19.137 "rw_mbytes_per_sec": 0, 00:19:19.137 "r_mbytes_per_sec": 0, 00:19:19.137 "w_mbytes_per_sec": 0 
00:19:19.137 }, 00:19:19.137 "claimed": false, 00:19:19.137 "zoned": false, 00:19:19.137 "supported_io_types": { 00:19:19.137 "read": true, 00:19:19.137 "write": true, 00:19:19.137 "unmap": false, 00:19:19.137 "flush": false, 00:19:19.137 "reset": true, 00:19:19.137 "nvme_admin": false, 00:19:19.137 "nvme_io": false, 00:19:19.137 "nvme_io_md": false, 00:19:19.137 "write_zeroes": true, 00:19:19.137 "zcopy": false, 00:19:19.137 "get_zone_info": false, 00:19:19.137 "zone_management": false, 00:19:19.137 "zone_append": false, 00:19:19.137 "compare": false, 00:19:19.137 "compare_and_write": false, 00:19:19.137 "abort": false, 00:19:19.137 "seek_hole": false, 00:19:19.137 "seek_data": false, 00:19:19.137 "copy": false, 00:19:19.137 "nvme_iov_md": false 00:19:19.137 }, 00:19:19.137 "memory_domains": [ 00:19:19.137 { 00:19:19.137 "dma_device_id": "system", 00:19:19.137 "dma_device_type": 1 00:19:19.137 }, 00:19:19.137 { 00:19:19.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.137 "dma_device_type": 2 00:19:19.137 }, 00:19:19.137 { 00:19:19.137 "dma_device_id": "system", 00:19:19.137 "dma_device_type": 1 00:19:19.137 }, 00:19:19.137 { 00:19:19.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.137 "dma_device_type": 2 00:19:19.137 } 00:19:19.137 ], 00:19:19.137 "driver_specific": { 00:19:19.137 "raid": { 00:19:19.137 "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846", 00:19:19.137 "strip_size_kb": 0, 00:19:19.137 "state": "online", 00:19:19.137 "raid_level": "raid1", 00:19:19.137 "superblock": true, 00:19:19.137 "num_base_bdevs": 2, 00:19:19.137 "num_base_bdevs_discovered": 2, 00:19:19.137 "num_base_bdevs_operational": 2, 00:19:19.137 "base_bdevs_list": [ 00:19:19.137 { 00:19:19.137 "name": "pt1", 00:19:19.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.137 "is_configured": true, 00:19:19.137 "data_offset": 2048, 00:19:19.137 "data_size": 63488 00:19:19.137 }, 00:19:19.137 { 00:19:19.137 "name": "pt2", 00:19:19.137 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:19.137 "is_configured": true, 00:19:19.137 "data_offset": 2048, 00:19:19.137 "data_size": 63488 00:19:19.137 } 00:19:19.137 ] 00:19:19.137 } 00:19:19.137 } 00:19:19.137 }' 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:19.137 pt2' 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.137 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:19.138 [2024-11-08 17:06:55.792799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d5b00ee9-c176-4607-b0a9-4d1d00dcb846 '!=' d5b00ee9-c176-4607-b0a9-4d1d00dcb846 ']' 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.138 [2024-11-08 17:06:55.828561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.138 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.399 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:19:19.399 "name": "raid_bdev1", 00:19:19.399 "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846", 00:19:19.399 "strip_size_kb": 0, 00:19:19.399 "state": "online", 00:19:19.399 "raid_level": "raid1", 00:19:19.399 "superblock": true, 00:19:19.399 "num_base_bdevs": 2, 00:19:19.399 "num_base_bdevs_discovered": 1, 00:19:19.399 "num_base_bdevs_operational": 1, 00:19:19.399 "base_bdevs_list": [ 00:19:19.399 { 00:19:19.399 "name": null, 00:19:19.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.399 "is_configured": false, 00:19:19.399 "data_offset": 0, 00:19:19.399 "data_size": 63488 00:19:19.399 }, 00:19:19.399 { 00:19:19.399 "name": "pt2", 00:19:19.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.399 "is_configured": true, 00:19:19.399 "data_offset": 2048, 00:19:19.399 "data_size": 63488 00:19:19.399 } 00:19:19.399 ] 00:19:19.399 }' 00:19:19.399 17:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.399 17:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.659 [2024-11-08 17:06:56.184697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.659 [2024-11-08 17:06:56.184796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.659 [2024-11-08 17:06:56.184969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.659 [2024-11-08 17:06:56.185051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.659 [2024-11-08 17:06:56.185067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.659 [2024-11-08 17:06:56.236582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:19.659 [2024-11-08 17:06:56.236729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.659 [2024-11-08 17:06:56.236772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:19.659 [2024-11-08 17:06:56.236789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.659 [2024-11-08 17:06:56.240072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.659 [2024-11-08 17:06:56.240142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:19.659 [2024-11-08 17:06:56.240280] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:19.659 [2024-11-08 17:06:56.240346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:19.659 [2024-11-08 17:06:56.240481] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:19.659 [2024-11-08 17:06:56.240498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:19.659 [2024-11-08 17:06:56.240862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:19.659 [2024-11-08 17:06:56.241043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:19.659 [2024-11-08 17:06:56.241053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:19:19.659 [2024-11-08 17:06:56.241298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.659 pt2 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:19:19.659 "name": "raid_bdev1", 00:19:19.659 "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846", 00:19:19.659 "strip_size_kb": 0, 00:19:19.659 "state": "online", 00:19:19.659 "raid_level": "raid1", 00:19:19.659 "superblock": true, 00:19:19.659 "num_base_bdevs": 2, 00:19:19.659 "num_base_bdevs_discovered": 1, 00:19:19.659 "num_base_bdevs_operational": 1, 00:19:19.659 "base_bdevs_list": [ 00:19:19.659 { 00:19:19.659 "name": null, 00:19:19.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.659 "is_configured": false, 00:19:19.659 "data_offset": 2048, 00:19:19.659 "data_size": 63488 00:19:19.659 }, 00:19:19.659 { 00:19:19.659 "name": "pt2", 00:19:19.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.659 "is_configured": true, 00:19:19.659 "data_offset": 2048, 00:19:19.659 "data_size": 63488 00:19:19.659 } 00:19:19.659 ] 00:19:19.659 }' 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.659 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.918 [2024-11-08 17:06:56.572692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.918 [2024-11-08 17:06:56.572744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.918 [2024-11-08 17:06:56.572897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.918 [2024-11-08 17:06:56.572974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.918 [2024-11-08 17:06:56.572986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.918 [2024-11-08 17:06:56.620865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:19.918 [2024-11-08 17:06:56.621004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.918 [2024-11-08 17:06:56.621046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:19.918 [2024-11-08 17:06:56.621067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.918 [2024-11-08 17:06:56.625116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.918 [2024-11-08 17:06:56.625188] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:19.918 [2024-11-08 17:06:56.625383] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:19.918 [2024-11-08 17:06:56.625469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:19.918 [2024-11-08 17:06:56.625852] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:19.918 [2024-11-08 17:06:56.625885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.918 [2024-11-08 17:06:56.625917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:19.918 [2024-11-08 17:06:56.626011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:19.918 [2024-11-08 17:06:56.626174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:19.918 [2024-11-08 17:06:56.626188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:19.918 pt1 00:19:19.918 [2024-11-08 17:06:56.626664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:19.918 [2024-11-08 17:06:56.626989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:19.918 [2024-11-08 17:06:56.627012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:19.918 [2024-11-08 17:06:56.627262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.918 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.214 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.214 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.214 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.214 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.214 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.214 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.214 "name": "raid_bdev1", 00:19:20.214 "uuid": "d5b00ee9-c176-4607-b0a9-4d1d00dcb846", 00:19:20.214 "strip_size_kb": 0, 00:19:20.214 "state": "online", 00:19:20.214 "raid_level": "raid1", 00:19:20.214 "superblock": true, 00:19:20.214 "num_base_bdevs": 2, 00:19:20.214 "num_base_bdevs_discovered": 1, 00:19:20.214 "num_base_bdevs_operational": 
1, 00:19:20.214 "base_bdevs_list": [ 00:19:20.214 { 00:19:20.214 "name": null, 00:19:20.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.214 "is_configured": false, 00:19:20.214 "data_offset": 2048, 00:19:20.214 "data_size": 63488 00:19:20.214 }, 00:19:20.214 { 00:19:20.214 "name": "pt2", 00:19:20.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.214 "is_configured": true, 00:19:20.214 "data_offset": 2048, 00:19:20.214 "data_size": 63488 00:19:20.214 } 00:19:20.214 ] 00:19:20.214 }' 00:19:20.214 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.214 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.476 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:20.476 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:20.476 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.476 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.476 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.476 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:20.476 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.476 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.476 17:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.476 17:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:20.476 [2024-11-08 17:06:56.997954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d5b00ee9-c176-4607-b0a9-4d1d00dcb846 '!=' d5b00ee9-c176-4607-b0a9-4d1d00dcb846 ']' 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62118 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62118 ']' 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 62118 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62118 00:19:20.476 killing process with pid 62118 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62118' 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62118 00:19:20.476 17:06:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62118 00:19:20.476 [2024-11-08 17:06:57.058503] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:20.476 [2024-11-08 17:06:57.058671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.476 [2024-11-08 17:06:57.058768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.476 [2024-11-08 17:06:57.058790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:19:20.738 [2024-11-08 17:06:57.220153] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:21.684 17:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:21.684 00:19:21.684 real 0m5.062s 00:19:21.684 user 0m7.320s 00:19:21.684 sys 0m1.011s 00:19:21.684 17:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:21.684 17:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.684 ************************************ 00:19:21.684 END TEST raid_superblock_test 00:19:21.684 ************************************ 00:19:21.684 17:06:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:19:21.684 17:06:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:21.684 17:06:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:21.684 17:06:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:21.684 ************************************ 00:19:21.684 START TEST raid_read_error_test 00:19:21.684 ************************************ 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nq4ZXM80FT 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62437 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62437 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:21.684 
17:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62437 ']' 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:21.684 17:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.684 [2024-11-08 17:06:58.287950] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:19:21.684 [2024-11-08 17:06:58.288146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62437 ] 00:19:21.943 [2024-11-08 17:06:58.456125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.943 [2024-11-08 17:06:58.633461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.203 [2024-11-08 17:06:58.819858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.203 [2024-11-08 17:06:58.819913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 BaseBdev1_malloc 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 true 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 [2024-11-08 17:06:59.238093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:22.774 [2024-11-08 17:06:59.238178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.774 [2024-11-08 17:06:59.238206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:22.774 [2024-11-08 17:06:59.238221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.774 [2024-11-08 17:06:59.240976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.774 [2024-11-08 17:06:59.241036] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:19:22.774 BaseBdev1 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 BaseBdev2_malloc 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 true 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 [2024-11-08 17:06:59.295178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:22.774 [2024-11-08 17:06:59.295266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.774 [2024-11-08 17:06:59.295292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:22.774 [2024-11-08 17:06:59.295307] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.774 [2024-11-08 17:06:59.298069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.774 [2024-11-08 17:06:59.298134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:22.774 BaseBdev2 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.774 [2024-11-08 17:06:59.303270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:22.774 [2024-11-08 17:06:59.305667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:22.774 [2024-11-08 17:06:59.306013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:22.774 [2024-11-08 17:06:59.306050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:22.774 [2024-11-08 17:06:59.306378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:22.774 [2024-11-08 17:06:59.306602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:22.774 [2024-11-08 17:06:59.306613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:22.774 [2024-11-08 17:06:59.306814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:22.774 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.775 "name": "raid_bdev1", 00:19:22.775 "uuid": "f1325c40-db31-4162-9486-f564da457de3", 00:19:22.775 "strip_size_kb": 0, 00:19:22.775 "state": "online", 00:19:22.775 "raid_level": "raid1", 00:19:22.775 "superblock": true, 00:19:22.775 "num_base_bdevs": 2, 00:19:22.775 
"num_base_bdevs_discovered": 2, 00:19:22.775 "num_base_bdevs_operational": 2, 00:19:22.775 "base_bdevs_list": [ 00:19:22.775 { 00:19:22.775 "name": "BaseBdev1", 00:19:22.775 "uuid": "145f8335-c1a9-533d-9edb-b2b1c7922224", 00:19:22.775 "is_configured": true, 00:19:22.775 "data_offset": 2048, 00:19:22.775 "data_size": 63488 00:19:22.775 }, 00:19:22.775 { 00:19:22.775 "name": "BaseBdev2", 00:19:22.775 "uuid": "8099d6ee-9bbd-58ed-bdb3-dc3b20b26c8d", 00:19:22.775 "is_configured": true, 00:19:22.775 "data_offset": 2048, 00:19:22.775 "data_size": 63488 00:19:22.775 } 00:19:22.775 ] 00:19:22.775 }' 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.775 17:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.043 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:23.043 17:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:23.043 [2024-11-08 17:06:59.732656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:19:23.990 17:07:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.990 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.990 "name": "raid_bdev1", 00:19:23.990 "uuid": "f1325c40-db31-4162-9486-f564da457de3", 00:19:23.990 "strip_size_kb": 0, 00:19:23.990 "state": "online", 
00:19:23.990 "raid_level": "raid1", 00:19:23.990 "superblock": true, 00:19:23.990 "num_base_bdevs": 2, 00:19:23.990 "num_base_bdevs_discovered": 2, 00:19:23.990 "num_base_bdevs_operational": 2, 00:19:23.990 "base_bdevs_list": [ 00:19:23.990 { 00:19:23.990 "name": "BaseBdev1", 00:19:23.991 "uuid": "145f8335-c1a9-533d-9edb-b2b1c7922224", 00:19:23.991 "is_configured": true, 00:19:23.991 "data_offset": 2048, 00:19:23.991 "data_size": 63488 00:19:23.991 }, 00:19:23.991 { 00:19:23.991 "name": "BaseBdev2", 00:19:23.991 "uuid": "8099d6ee-9bbd-58ed-bdb3-dc3b20b26c8d", 00:19:23.991 "is_configured": true, 00:19:23.991 "data_offset": 2048, 00:19:23.991 "data_size": 63488 00:19:23.991 } 00:19:23.991 ] 00:19:23.991 }' 00:19:23.991 17:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.991 17:07:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.563 [2024-11-08 17:07:01.029560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.563 [2024-11-08 17:07:01.029623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:24.563 [2024-11-08 17:07:01.033002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.563 [2024-11-08 17:07:01.033075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.563 [2024-11-08 17:07:01.033188] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.563 [2024-11-08 17:07:01.033204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:19:24.563 { 00:19:24.563 "results": [ 00:19:24.563 { 00:19:24.563 "job": "raid_bdev1", 00:19:24.563 "core_mask": "0x1", 00:19:24.563 "workload": "randrw", 00:19:24.563 "percentage": 50, 00:19:24.563 "status": "finished", 00:19:24.563 "queue_depth": 1, 00:19:24.563 "io_size": 131072, 00:19:24.563 "runtime": 1.294395, 00:19:24.563 "iops": 12972.083483017163, 00:19:24.563 "mibps": 1621.5104353771453, 00:19:24.563 "io_failed": 0, 00:19:24.563 "io_timeout": 0, 00:19:24.563 "avg_latency_us": 73.9300434756715, 00:19:24.563 "min_latency_us": 29.53846153846154, 00:19:24.563 "max_latency_us": 1751.8276923076924 00:19:24.563 } 00:19:24.563 ], 00:19:24.563 "core_count": 1 00:19:24.563 } 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62437 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62437 ']' 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62437 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62437 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:24.563 killing process with pid 62437 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62437' 00:19:24.563 17:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62437 00:19:24.564 17:07:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62437 00:19:24.564 [2024-11-08 17:07:01.063465] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:24.564 [2024-11-08 17:07:01.164717] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.538 17:07:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nq4ZXM80FT 00:19:25.538 17:07:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:25.538 17:07:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:25.538 17:07:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:19:25.538 17:07:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:19:25.538 17:07:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:25.538 17:07:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:25.538 17:07:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:25.538 00:19:25.538 real 0m3.869s 00:19:25.538 user 0m4.505s 00:19:25.538 sys 0m0.594s 00:19:25.538 ************************************ 00:19:25.538 END TEST raid_read_error_test 00:19:25.538 ************************************ 00:19:25.538 17:07:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:25.538 17:07:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.538 17:07:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:19:25.538 17:07:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:25.538 17:07:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:25.538 17:07:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.538 ************************************ 00:19:25.538 START TEST 
raid_write_error_test 00:19:25.538 ************************************ 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:25.538 17:07:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0o66hJZHsk 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62577 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62577 00:19:25.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62577 ']' 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:25.538 17:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.538 [2024-11-08 17:07:02.218829] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:19:25.538 [2024-11-08 17:07:02.219053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62577 ] 00:19:25.809 [2024-11-08 17:07:02.385856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.070 [2024-11-08 17:07:02.551254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.070 [2024-11-08 17:07:02.735804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.070 [2024-11-08 17:07:02.735913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.644 BaseBdev1_malloc 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.644 true 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.644 [2024-11-08 17:07:03.140091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:26.644 [2024-11-08 17:07:03.140182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.644 [2024-11-08 17:07:03.140208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:26.644 [2024-11-08 17:07:03.140222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.644 [2024-11-08 17:07:03.143032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.644 [2024-11-08 17:07:03.143095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:26.644 BaseBdev1 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.644 BaseBdev2_malloc 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:26.644 17:07:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.644 true 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.644 [2024-11-08 17:07:03.192912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:26.644 [2024-11-08 17:07:03.192997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.644 [2024-11-08 17:07:03.193018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:26.644 [2024-11-08 17:07:03.193030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.644 [2024-11-08 17:07:03.195737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.644 [2024-11-08 17:07:03.195818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:26.644 BaseBdev2 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.644 [2024-11-08 17:07:03.201005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:19:26.644 [2024-11-08 17:07:03.203395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:26.644 [2024-11-08 17:07:03.203653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:26.644 [2024-11-08 17:07:03.203672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:26.644 [2024-11-08 17:07:03.204008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:26.644 [2024-11-08 17:07:03.204249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:26.644 [2024-11-08 17:07:03.204262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:26.644 [2024-11-08 17:07:03.204462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.644 "name": "raid_bdev1", 00:19:26.644 "uuid": "71243d48-9501-424b-a626-9f53bc7ab592", 00:19:26.644 "strip_size_kb": 0, 00:19:26.644 "state": "online", 00:19:26.644 "raid_level": "raid1", 00:19:26.644 "superblock": true, 00:19:26.644 "num_base_bdevs": 2, 00:19:26.644 "num_base_bdevs_discovered": 2, 00:19:26.644 "num_base_bdevs_operational": 2, 00:19:26.644 "base_bdevs_list": [ 00:19:26.644 { 00:19:26.644 "name": "BaseBdev1", 00:19:26.644 "uuid": "94ec7ef9-5916-5b84-95be-033ba520411c", 00:19:26.644 "is_configured": true, 00:19:26.644 "data_offset": 2048, 00:19:26.644 "data_size": 63488 00:19:26.644 }, 00:19:26.644 { 00:19:26.644 "name": "BaseBdev2", 00:19:26.644 "uuid": "c1a1bc5d-cd0f-518f-a411-276d665ecff1", 00:19:26.644 "is_configured": true, 00:19:26.644 "data_offset": 2048, 00:19:26.644 "data_size": 63488 00:19:26.644 } 00:19:26.644 ] 00:19:26.644 }' 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.644 17:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.906 17:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:26.906 17:07:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:27.165 [2024-11-08 17:07:03.634480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.105 [2024-11-08 17:07:04.543476] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:19:28.105 [2024-11-08 17:07:04.543576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:28.105 [2024-11-08 17:07:04.543850] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.105 "name": "raid_bdev1", 00:19:28.105 "uuid": "71243d48-9501-424b-a626-9f53bc7ab592", 00:19:28.105 "strip_size_kb": 0, 00:19:28.105 "state": "online", 00:19:28.105 "raid_level": "raid1", 00:19:28.105 "superblock": true, 00:19:28.105 "num_base_bdevs": 2, 00:19:28.105 "num_base_bdevs_discovered": 1, 00:19:28.105 "num_base_bdevs_operational": 1, 00:19:28.105 "base_bdevs_list": [ 00:19:28.105 { 00:19:28.105 "name": null, 00:19:28.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.105 "is_configured": false, 00:19:28.105 "data_offset": 0, 00:19:28.105 "data_size": 63488 00:19:28.105 }, 00:19:28.105 { 00:19:28.105 "name": 
"BaseBdev2", 00:19:28.105 "uuid": "c1a1bc5d-cd0f-518f-a411-276d665ecff1", 00:19:28.105 "is_configured": true, 00:19:28.105 "data_offset": 2048, 00:19:28.105 "data_size": 63488 00:19:28.105 } 00:19:28.105 ] 00:19:28.105 }' 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.105 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.366 [2024-11-08 17:07:04.889234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:28.366 [2024-11-08 17:07:04.889295] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.366 [2024-11-08 17:07:04.892498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.366 [2024-11-08 17:07:04.892566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.366 [2024-11-08 17:07:04.892651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.366 [2024-11-08 17:07:04.892668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:28.366 { 00:19:28.366 "results": [ 00:19:28.366 { 00:19:28.366 "job": "raid_bdev1", 00:19:28.366 "core_mask": "0x1", 00:19:28.366 "workload": "randrw", 00:19:28.366 "percentage": 50, 00:19:28.366 "status": "finished", 00:19:28.366 "queue_depth": 1, 00:19:28.366 "io_size": 131072, 00:19:28.366 "runtime": 1.252114, 00:19:28.366 "iops": 14795.777381292757, 00:19:28.366 "mibps": 1849.4721726615946, 00:19:28.366 "io_failed": 0, 00:19:28.366 "io_timeout": 0, 
00:19:28.366 "avg_latency_us": 64.37208746128103, 00:19:28.366 "min_latency_us": 28.553846153846155, 00:19:28.366 "max_latency_us": 1764.4307692307693 00:19:28.366 } 00:19:28.366 ], 00:19:28.366 "core_count": 1 00:19:28.366 } 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62577 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62577 ']' 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62577 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62577 00:19:28.366 killing process with pid 62577 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62577' 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62577 00:19:28.366 17:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62577 00:19:28.366 [2024-11-08 17:07:04.928407] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:28.366 [2024-11-08 17:07:05.034049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:29.316 17:07:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:29.316 17:07:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v 
Job /raidtest/tmp.0o66hJZHsk 00:19:29.316 17:07:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:29.316 17:07:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:19:29.316 17:07:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:19:29.316 17:07:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:29.316 17:07:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:29.316 17:07:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:29.316 00:19:29.316 real 0m3.857s 00:19:29.316 user 0m4.450s 00:19:29.316 sys 0m0.594s 00:19:29.316 17:07:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:29.316 ************************************ 00:19:29.316 END TEST raid_write_error_test 00:19:29.316 ************************************ 00:19:29.316 17:07:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.576 17:07:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:19:29.576 17:07:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:29.576 17:07:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:19:29.576 17:07:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:29.576 17:07:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:29.576 17:07:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.576 ************************************ 00:19:29.576 START TEST raid_state_function_test 00:19:29.576 ************************************ 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:29.576 17:07:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:29.576 Process raid pid: 62710 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62710 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62710' 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62710 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62710 ']' 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:29.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:29.576 17:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.576 [2024-11-08 17:07:06.136745] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:19:29.576 [2024-11-08 17:07:06.136917] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.837 [2024-11-08 17:07:06.308588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.837 [2024-11-08 17:07:06.476975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.097 [2024-11-08 17:07:06.664365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.097 [2024-11-08 17:07:06.664451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.357 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:30.357 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:19:30.357 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:30.357 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.357 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.357 [2024-11-08 17:07:07.037129] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:30.358 [2024-11-08 17:07:07.037213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:30.358 [2024-11-08 17:07:07.037227] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.358 [2024-11-08 17:07:07.037239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.358 [2024-11-08 17:07:07.037247] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:30.358 [2024-11-08 17:07:07.037259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.358 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.618 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.618 "name": "Existed_Raid", 00:19:30.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.618 "strip_size_kb": 64, 00:19:30.618 "state": "configuring", 00:19:30.618 "raid_level": "raid0", 00:19:30.618 "superblock": false, 00:19:30.618 "num_base_bdevs": 3, 00:19:30.618 "num_base_bdevs_discovered": 0, 00:19:30.618 "num_base_bdevs_operational": 3, 00:19:30.618 "base_bdevs_list": [ 00:19:30.618 { 00:19:30.618 "name": "BaseBdev1", 00:19:30.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.618 "is_configured": false, 00:19:30.618 "data_offset": 0, 00:19:30.618 "data_size": 0 00:19:30.618 }, 00:19:30.618 { 00:19:30.618 "name": "BaseBdev2", 00:19:30.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.618 "is_configured": false, 00:19:30.618 "data_offset": 0, 00:19:30.618 "data_size": 0 00:19:30.618 }, 00:19:30.618 { 00:19:30.618 "name": "BaseBdev3", 00:19:30.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.618 "is_configured": false, 00:19:30.618 "data_offset": 0, 00:19:30.618 "data_size": 0 00:19:30.618 } 00:19:30.618 ] 00:19:30.618 }' 00:19:30.618 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.618 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.879 17:07:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.879 [2024-11-08 17:07:07.369176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.879 [2024-11-08 17:07:07.369244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.879 [2024-11-08 17:07:07.377184] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:30.879 [2024-11-08 17:07:07.377256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:30.879 [2024-11-08 17:07:07.377267] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.879 [2024-11-08 17:07:07.377278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.879 [2024-11-08 17:07:07.377284] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:30.879 [2024-11-08 17:07:07.377294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.879 [2024-11-08 17:07:07.419659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.879 BaseBdev1 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.879 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.880 [ 00:19:30.880 { 00:19:30.880 "name": "BaseBdev1", 00:19:30.880 "aliases": [ 00:19:30.880 "ab229085-e318-40b1-a937-0e4bc365396e" 00:19:30.880 ], 00:19:30.880 
"product_name": "Malloc disk", 00:19:30.880 "block_size": 512, 00:19:30.880 "num_blocks": 65536, 00:19:30.880 "uuid": "ab229085-e318-40b1-a937-0e4bc365396e", 00:19:30.880 "assigned_rate_limits": { 00:19:30.880 "rw_ios_per_sec": 0, 00:19:30.880 "rw_mbytes_per_sec": 0, 00:19:30.880 "r_mbytes_per_sec": 0, 00:19:30.880 "w_mbytes_per_sec": 0 00:19:30.880 }, 00:19:30.880 "claimed": true, 00:19:30.880 "claim_type": "exclusive_write", 00:19:30.880 "zoned": false, 00:19:30.880 "supported_io_types": { 00:19:30.880 "read": true, 00:19:30.880 "write": true, 00:19:30.880 "unmap": true, 00:19:30.880 "flush": true, 00:19:30.880 "reset": true, 00:19:30.880 "nvme_admin": false, 00:19:30.880 "nvme_io": false, 00:19:30.880 "nvme_io_md": false, 00:19:30.880 "write_zeroes": true, 00:19:30.880 "zcopy": true, 00:19:30.880 "get_zone_info": false, 00:19:30.880 "zone_management": false, 00:19:30.880 "zone_append": false, 00:19:30.880 "compare": false, 00:19:30.880 "compare_and_write": false, 00:19:30.880 "abort": true, 00:19:30.880 "seek_hole": false, 00:19:30.880 "seek_data": false, 00:19:30.880 "copy": true, 00:19:30.880 "nvme_iov_md": false 00:19:30.880 }, 00:19:30.880 "memory_domains": [ 00:19:30.880 { 00:19:30.880 "dma_device_id": "system", 00:19:30.880 "dma_device_type": 1 00:19:30.880 }, 00:19:30.880 { 00:19:30.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.880 "dma_device_type": 2 00:19:30.880 } 00:19:30.880 ], 00:19:30.880 "driver_specific": {} 00:19:30.880 } 00:19:30.880 ] 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.880 17:07:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.880 "name": "Existed_Raid", 00:19:30.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.880 "strip_size_kb": 64, 00:19:30.880 "state": "configuring", 00:19:30.880 "raid_level": "raid0", 00:19:30.880 "superblock": false, 00:19:30.880 "num_base_bdevs": 3, 00:19:30.880 "num_base_bdevs_discovered": 1, 00:19:30.880 "num_base_bdevs_operational": 3, 00:19:30.880 "base_bdevs_list": [ 00:19:30.880 { 00:19:30.880 "name": "BaseBdev1", 
00:19:30.880 "uuid": "ab229085-e318-40b1-a937-0e4bc365396e", 00:19:30.880 "is_configured": true, 00:19:30.880 "data_offset": 0, 00:19:30.880 "data_size": 65536 00:19:30.880 }, 00:19:30.880 { 00:19:30.880 "name": "BaseBdev2", 00:19:30.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.880 "is_configured": false, 00:19:30.880 "data_offset": 0, 00:19:30.880 "data_size": 0 00:19:30.880 }, 00:19:30.880 { 00:19:30.880 "name": "BaseBdev3", 00:19:30.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.880 "is_configured": false, 00:19:30.880 "data_offset": 0, 00:19:30.880 "data_size": 0 00:19:30.880 } 00:19:30.880 ] 00:19:30.880 }' 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.880 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.143 [2024-11-08 17:07:07.783855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:31.143 [2024-11-08 17:07:07.783958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.143 [2024-11-08 
17:07:07.791919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.143 [2024-11-08 17:07:07.794423] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.143 [2024-11-08 17:07:07.794491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.143 [2024-11-08 17:07:07.794503] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:31.143 [2024-11-08 17:07:07.794514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.143 "name": "Existed_Raid", 00:19:31.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.143 "strip_size_kb": 64, 00:19:31.143 "state": "configuring", 00:19:31.143 "raid_level": "raid0", 00:19:31.143 "superblock": false, 00:19:31.143 "num_base_bdevs": 3, 00:19:31.143 "num_base_bdevs_discovered": 1, 00:19:31.143 "num_base_bdevs_operational": 3, 00:19:31.143 "base_bdevs_list": [ 00:19:31.143 { 00:19:31.143 "name": "BaseBdev1", 00:19:31.143 "uuid": "ab229085-e318-40b1-a937-0e4bc365396e", 00:19:31.143 "is_configured": true, 00:19:31.143 "data_offset": 0, 00:19:31.143 "data_size": 65536 00:19:31.143 }, 00:19:31.143 { 00:19:31.143 "name": "BaseBdev2", 00:19:31.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.143 "is_configured": false, 00:19:31.143 "data_offset": 0, 00:19:31.143 "data_size": 0 00:19:31.143 }, 00:19:31.143 { 00:19:31.143 "name": "BaseBdev3", 00:19:31.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.143 "is_configured": false, 00:19:31.143 "data_offset": 0, 00:19:31.143 "data_size": 0 00:19:31.143 } 00:19:31.143 ] 00:19:31.143 }' 00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:19:31.143 17:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.404 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:31.404 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.404 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.665 [2024-11-08 17:07:08.147742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:31.665 BaseBdev2 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:31.665 17:07:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.665 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.665 [ 00:19:31.665 { 00:19:31.665 "name": "BaseBdev2", 00:19:31.665 "aliases": [ 00:19:31.665 "7939074a-c7aa-445e-8bf2-db5c6803dcb7" 00:19:31.665 ], 00:19:31.665 "product_name": "Malloc disk", 00:19:31.665 "block_size": 512, 00:19:31.665 "num_blocks": 65536, 00:19:31.665 "uuid": "7939074a-c7aa-445e-8bf2-db5c6803dcb7", 00:19:31.665 "assigned_rate_limits": { 00:19:31.665 "rw_ios_per_sec": 0, 00:19:31.665 "rw_mbytes_per_sec": 0, 00:19:31.666 "r_mbytes_per_sec": 0, 00:19:31.666 "w_mbytes_per_sec": 0 00:19:31.666 }, 00:19:31.666 "claimed": true, 00:19:31.666 "claim_type": "exclusive_write", 00:19:31.666 "zoned": false, 00:19:31.666 "supported_io_types": { 00:19:31.666 "read": true, 00:19:31.666 "write": true, 00:19:31.666 "unmap": true, 00:19:31.666 "flush": true, 00:19:31.666 "reset": true, 00:19:31.666 "nvme_admin": false, 00:19:31.666 "nvme_io": false, 00:19:31.666 "nvme_io_md": false, 00:19:31.666 "write_zeroes": true, 00:19:31.666 "zcopy": true, 00:19:31.666 "get_zone_info": false, 00:19:31.666 "zone_management": false, 00:19:31.666 "zone_append": false, 00:19:31.666 "compare": false, 00:19:31.666 "compare_and_write": false, 00:19:31.666 "abort": true, 00:19:31.666 "seek_hole": false, 00:19:31.666 "seek_data": false, 00:19:31.666 "copy": true, 00:19:31.666 "nvme_iov_md": false 00:19:31.666 }, 00:19:31.666 "memory_domains": [ 00:19:31.666 { 00:19:31.666 "dma_device_id": "system", 00:19:31.666 "dma_device_type": 1 00:19:31.666 }, 00:19:31.666 { 00:19:31.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.666 "dma_device_type": 2 00:19:31.666 } 00:19:31.666 ], 00:19:31.666 "driver_specific": {} 00:19:31.666 } 00:19:31.666 ] 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.666 17:07:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.666 "name": "Existed_Raid", 00:19:31.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.666 "strip_size_kb": 64, 00:19:31.666 "state": "configuring", 00:19:31.666 "raid_level": "raid0", 00:19:31.666 "superblock": false, 00:19:31.666 "num_base_bdevs": 3, 00:19:31.666 "num_base_bdevs_discovered": 2, 00:19:31.666 "num_base_bdevs_operational": 3, 00:19:31.666 "base_bdevs_list": [ 00:19:31.666 { 00:19:31.666 "name": "BaseBdev1", 00:19:31.666 "uuid": "ab229085-e318-40b1-a937-0e4bc365396e", 00:19:31.666 "is_configured": true, 00:19:31.666 "data_offset": 0, 00:19:31.666 "data_size": 65536 00:19:31.666 }, 00:19:31.666 { 00:19:31.666 "name": "BaseBdev2", 00:19:31.666 "uuid": "7939074a-c7aa-445e-8bf2-db5c6803dcb7", 00:19:31.666 "is_configured": true, 00:19:31.666 "data_offset": 0, 00:19:31.666 "data_size": 65536 00:19:31.666 }, 00:19:31.666 { 00:19:31.666 "name": "BaseBdev3", 00:19:31.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.666 "is_configured": false, 00:19:31.666 "data_offset": 0, 00:19:31.666 "data_size": 0 00:19:31.666 } 00:19:31.666 ] 00:19:31.666 }' 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.666 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.927 [2024-11-08 17:07:08.536239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.927 [2024-11-08 17:07:08.536313] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:31.927 [2024-11-08 17:07:08.536331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:31.927 [2024-11-08 17:07:08.536669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:31.927 [2024-11-08 17:07:08.536875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:31.927 [2024-11-08 17:07:08.536888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:31.927 [2024-11-08 17:07:08.537199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.927 BaseBdev3 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.927 
17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.927 [ 00:19:31.927 { 00:19:31.927 "name": "BaseBdev3", 00:19:31.927 "aliases": [ 00:19:31.927 "5b23f9df-bc90-4cfa-92a0-e5ceea4113df" 00:19:31.927 ], 00:19:31.927 "product_name": "Malloc disk", 00:19:31.927 "block_size": 512, 00:19:31.927 "num_blocks": 65536, 00:19:31.927 "uuid": "5b23f9df-bc90-4cfa-92a0-e5ceea4113df", 00:19:31.927 "assigned_rate_limits": { 00:19:31.927 "rw_ios_per_sec": 0, 00:19:31.927 "rw_mbytes_per_sec": 0, 00:19:31.927 "r_mbytes_per_sec": 0, 00:19:31.927 "w_mbytes_per_sec": 0 00:19:31.927 }, 00:19:31.927 "claimed": true, 00:19:31.927 "claim_type": "exclusive_write", 00:19:31.927 "zoned": false, 00:19:31.927 "supported_io_types": { 00:19:31.927 "read": true, 00:19:31.927 "write": true, 00:19:31.927 "unmap": true, 00:19:31.927 "flush": true, 00:19:31.927 "reset": true, 00:19:31.927 "nvme_admin": false, 00:19:31.927 "nvme_io": false, 00:19:31.927 "nvme_io_md": false, 00:19:31.927 "write_zeroes": true, 00:19:31.927 "zcopy": true, 00:19:31.927 "get_zone_info": false, 00:19:31.927 "zone_management": false, 00:19:31.927 "zone_append": false, 00:19:31.927 "compare": false, 00:19:31.927 "compare_and_write": false, 00:19:31.927 "abort": true, 00:19:31.927 "seek_hole": false, 00:19:31.927 "seek_data": false, 00:19:31.927 "copy": true, 00:19:31.927 "nvme_iov_md": false 00:19:31.927 }, 00:19:31.927 "memory_domains": [ 00:19:31.927 { 00:19:31.927 "dma_device_id": "system", 00:19:31.927 "dma_device_type": 1 00:19:31.927 }, 00:19:31.927 { 00:19:31.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.927 "dma_device_type": 2 00:19:31.927 } 00:19:31.927 ], 00:19:31.927 "driver_specific": {} 00:19:31.927 } 00:19:31.927 ] 
00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.927 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.927 "name": "Existed_Raid", 00:19:31.927 "uuid": "a0ac67e2-740e-4303-a8d2-b03e8cff0b58", 00:19:31.927 "strip_size_kb": 64, 00:19:31.927 "state": "online", 00:19:31.927 "raid_level": "raid0", 00:19:31.927 "superblock": false, 00:19:31.927 "num_base_bdevs": 3, 00:19:31.927 "num_base_bdevs_discovered": 3, 00:19:31.927 "num_base_bdevs_operational": 3, 00:19:31.927 "base_bdevs_list": [ 00:19:31.927 { 00:19:31.927 "name": "BaseBdev1", 00:19:31.928 "uuid": "ab229085-e318-40b1-a937-0e4bc365396e", 00:19:31.928 "is_configured": true, 00:19:31.928 "data_offset": 0, 00:19:31.928 "data_size": 65536 00:19:31.928 }, 00:19:31.928 { 00:19:31.928 "name": "BaseBdev2", 00:19:31.928 "uuid": "7939074a-c7aa-445e-8bf2-db5c6803dcb7", 00:19:31.928 "is_configured": true, 00:19:31.928 "data_offset": 0, 00:19:31.928 "data_size": 65536 00:19:31.928 }, 00:19:31.928 { 00:19:31.928 "name": "BaseBdev3", 00:19:31.928 "uuid": "5b23f9df-bc90-4cfa-92a0-e5ceea4113df", 00:19:31.928 "is_configured": true, 00:19:31.928 "data_offset": 0, 00:19:31.928 "data_size": 65536 00:19:31.928 } 00:19:31.928 ] 00:19:31.928 }' 00:19:31.928 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.928 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.189 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:32.189 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:32.189 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:32.189 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:19:32.189 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:32.189 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:32.189 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:32.189 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:32.189 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.189 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.189 [2024-11-08 17:07:08.896839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.450 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.450 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.450 "name": "Existed_Raid", 00:19:32.450 "aliases": [ 00:19:32.450 "a0ac67e2-740e-4303-a8d2-b03e8cff0b58" 00:19:32.450 ], 00:19:32.450 "product_name": "Raid Volume", 00:19:32.450 "block_size": 512, 00:19:32.450 "num_blocks": 196608, 00:19:32.450 "uuid": "a0ac67e2-740e-4303-a8d2-b03e8cff0b58", 00:19:32.450 "assigned_rate_limits": { 00:19:32.450 "rw_ios_per_sec": 0, 00:19:32.450 "rw_mbytes_per_sec": 0, 00:19:32.450 "r_mbytes_per_sec": 0, 00:19:32.450 "w_mbytes_per_sec": 0 00:19:32.450 }, 00:19:32.450 "claimed": false, 00:19:32.450 "zoned": false, 00:19:32.450 "supported_io_types": { 00:19:32.450 "read": true, 00:19:32.450 "write": true, 00:19:32.450 "unmap": true, 00:19:32.450 "flush": true, 00:19:32.450 "reset": true, 00:19:32.450 "nvme_admin": false, 00:19:32.450 "nvme_io": false, 00:19:32.450 "nvme_io_md": false, 00:19:32.450 "write_zeroes": true, 00:19:32.450 "zcopy": false, 00:19:32.450 "get_zone_info": false, 00:19:32.450 "zone_management": false, 00:19:32.450 
"zone_append": false, 00:19:32.450 "compare": false, 00:19:32.450 "compare_and_write": false, 00:19:32.450 "abort": false, 00:19:32.450 "seek_hole": false, 00:19:32.450 "seek_data": false, 00:19:32.450 "copy": false, 00:19:32.450 "nvme_iov_md": false 00:19:32.450 }, 00:19:32.451 "memory_domains": [ 00:19:32.451 { 00:19:32.451 "dma_device_id": "system", 00:19:32.451 "dma_device_type": 1 00:19:32.451 }, 00:19:32.451 { 00:19:32.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.451 "dma_device_type": 2 00:19:32.451 }, 00:19:32.451 { 00:19:32.451 "dma_device_id": "system", 00:19:32.451 "dma_device_type": 1 00:19:32.451 }, 00:19:32.451 { 00:19:32.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.451 "dma_device_type": 2 00:19:32.451 }, 00:19:32.451 { 00:19:32.451 "dma_device_id": "system", 00:19:32.451 "dma_device_type": 1 00:19:32.451 }, 00:19:32.451 { 00:19:32.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.451 "dma_device_type": 2 00:19:32.451 } 00:19:32.451 ], 00:19:32.451 "driver_specific": { 00:19:32.451 "raid": { 00:19:32.451 "uuid": "a0ac67e2-740e-4303-a8d2-b03e8cff0b58", 00:19:32.451 "strip_size_kb": 64, 00:19:32.451 "state": "online", 00:19:32.451 "raid_level": "raid0", 00:19:32.451 "superblock": false, 00:19:32.451 "num_base_bdevs": 3, 00:19:32.451 "num_base_bdevs_discovered": 3, 00:19:32.451 "num_base_bdevs_operational": 3, 00:19:32.451 "base_bdevs_list": [ 00:19:32.451 { 00:19:32.451 "name": "BaseBdev1", 00:19:32.451 "uuid": "ab229085-e318-40b1-a937-0e4bc365396e", 00:19:32.451 "is_configured": true, 00:19:32.451 "data_offset": 0, 00:19:32.451 "data_size": 65536 00:19:32.451 }, 00:19:32.451 { 00:19:32.451 "name": "BaseBdev2", 00:19:32.451 "uuid": "7939074a-c7aa-445e-8bf2-db5c6803dcb7", 00:19:32.451 "is_configured": true, 00:19:32.451 "data_offset": 0, 00:19:32.451 "data_size": 65536 00:19:32.451 }, 00:19:32.451 { 00:19:32.451 "name": "BaseBdev3", 00:19:32.451 "uuid": "5b23f9df-bc90-4cfa-92a0-e5ceea4113df", 00:19:32.451 "is_configured": true, 
00:19:32.451 "data_offset": 0, 00:19:32.451 "data_size": 65536 00:19:32.451 } 00:19:32.451 ] 00:19:32.451 } 00:19:32.451 } 00:19:32.451 }' 00:19:32.451 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:32.451 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:32.451 BaseBdev2 00:19:32.451 BaseBdev3' 00:19:32.451 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.451 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:32.451 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.451 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.451 17:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:32.451 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.451 17:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.451 17:07:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.451 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.451 [2024-11-08 17:07:09.104539] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:32.451 [2024-11-08 17:07:09.104589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.451 [2024-11-08 17:07:09.104665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.714 "name": "Existed_Raid", 00:19:32.714 "uuid": "a0ac67e2-740e-4303-a8d2-b03e8cff0b58", 00:19:32.714 "strip_size_kb": 64, 00:19:32.714 "state": "offline", 00:19:32.714 "raid_level": "raid0", 00:19:32.714 "superblock": false, 00:19:32.714 "num_base_bdevs": 3, 00:19:32.714 "num_base_bdevs_discovered": 2, 00:19:32.714 "num_base_bdevs_operational": 2, 00:19:32.714 "base_bdevs_list": [ 00:19:32.714 { 00:19:32.714 "name": null, 00:19:32.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.714 "is_configured": false, 00:19:32.714 "data_offset": 0, 00:19:32.714 "data_size": 65536 00:19:32.714 }, 00:19:32.714 { 00:19:32.714 "name": "BaseBdev2", 00:19:32.714 "uuid": "7939074a-c7aa-445e-8bf2-db5c6803dcb7", 00:19:32.714 "is_configured": true, 00:19:32.714 "data_offset": 0, 00:19:32.714 "data_size": 65536 00:19:32.714 }, 00:19:32.714 { 00:19:32.714 "name": "BaseBdev3", 00:19:32.714 "uuid": "5b23f9df-bc90-4cfa-92a0-e5ceea4113df", 00:19:32.714 "is_configured": true, 00:19:32.714 "data_offset": 0, 00:19:32.714 "data_size": 65536 00:19:32.714 } 00:19:32.714 ] 00:19:32.714 }' 00:19:32.714 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.714 17:07:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.975 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:32.975 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:32.975 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.975 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.976 [2024-11-08 17:07:09.516535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.976 17:07:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.976 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.976 [2024-11-08 17:07:09.631261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:32.976 [2024-11-08 17:07:09.631521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:33.236 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.237 BaseBdev2 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.237 [ 00:19:33.237 { 00:19:33.237 "name": "BaseBdev2", 00:19:33.237 "aliases": [ 00:19:33.237 "1af9aefe-6b96-4fd4-9884-e569dd7a3c26" 00:19:33.237 ], 00:19:33.237 "product_name": "Malloc disk", 00:19:33.237 "block_size": 512, 00:19:33.237 "num_blocks": 65536, 00:19:33.237 "uuid": "1af9aefe-6b96-4fd4-9884-e569dd7a3c26", 00:19:33.237 "assigned_rate_limits": { 00:19:33.237 "rw_ios_per_sec": 0, 00:19:33.237 "rw_mbytes_per_sec": 0, 00:19:33.237 "r_mbytes_per_sec": 0, 00:19:33.237 "w_mbytes_per_sec": 0 00:19:33.237 }, 00:19:33.237 "claimed": false, 00:19:33.237 "zoned": false, 00:19:33.237 "supported_io_types": { 00:19:33.237 "read": true, 00:19:33.237 "write": true, 00:19:33.237 "unmap": true, 00:19:33.237 "flush": true, 00:19:33.237 "reset": true, 00:19:33.237 "nvme_admin": false, 00:19:33.237 "nvme_io": false, 00:19:33.237 "nvme_io_md": false, 00:19:33.237 "write_zeroes": true, 00:19:33.237 "zcopy": true, 00:19:33.237 "get_zone_info": false, 00:19:33.237 "zone_management": false, 00:19:33.237 "zone_append": false, 00:19:33.237 "compare": false, 00:19:33.237 "compare_and_write": false, 00:19:33.237 "abort": true, 00:19:33.237 "seek_hole": false, 00:19:33.237 "seek_data": false, 00:19:33.237 "copy": true, 00:19:33.237 "nvme_iov_md": false 00:19:33.237 }, 00:19:33.237 "memory_domains": [ 00:19:33.237 { 00:19:33.237 "dma_device_id": "system", 00:19:33.237 "dma_device_type": 1 00:19:33.237 }, 
00:19:33.237 { 00:19:33.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.237 "dma_device_type": 2 00:19:33.237 } 00:19:33.237 ], 00:19:33.237 "driver_specific": {} 00:19:33.237 } 00:19:33.237 ] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.237 BaseBdev3 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.237 [ 00:19:33.237 { 00:19:33.237 "name": "BaseBdev3", 00:19:33.237 "aliases": [ 00:19:33.237 "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7" 00:19:33.237 ], 00:19:33.237 "product_name": "Malloc disk", 00:19:33.237 "block_size": 512, 00:19:33.237 "num_blocks": 65536, 00:19:33.237 "uuid": "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7", 00:19:33.237 "assigned_rate_limits": { 00:19:33.237 "rw_ios_per_sec": 0, 00:19:33.237 "rw_mbytes_per_sec": 0, 00:19:33.237 "r_mbytes_per_sec": 0, 00:19:33.237 "w_mbytes_per_sec": 0 00:19:33.237 }, 00:19:33.237 "claimed": false, 00:19:33.237 "zoned": false, 00:19:33.237 "supported_io_types": { 00:19:33.237 "read": true, 00:19:33.237 "write": true, 00:19:33.237 "unmap": true, 00:19:33.237 "flush": true, 00:19:33.237 "reset": true, 00:19:33.237 "nvme_admin": false, 00:19:33.237 "nvme_io": false, 00:19:33.237 "nvme_io_md": false, 00:19:33.237 "write_zeroes": true, 00:19:33.237 "zcopy": true, 00:19:33.237 "get_zone_info": false, 00:19:33.237 "zone_management": false, 00:19:33.237 "zone_append": false, 00:19:33.237 "compare": false, 00:19:33.237 "compare_and_write": false, 00:19:33.237 "abort": true, 00:19:33.237 "seek_hole": false, 00:19:33.237 "seek_data": false, 00:19:33.237 "copy": true, 00:19:33.237 "nvme_iov_md": false 00:19:33.237 }, 00:19:33.237 "memory_domains": [ 00:19:33.237 { 00:19:33.237 "dma_device_id": "system", 00:19:33.237 "dma_device_type": 1 00:19:33.237 }, 00:19:33.237 { 
00:19:33.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.237 "dma_device_type": 2 00:19:33.237 } 00:19:33.237 ], 00:19:33.237 "driver_specific": {} 00:19:33.237 } 00:19:33.237 ] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.237 [2024-11-08 17:07:09.877809] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:33.237 [2024-11-08 17:07:09.878059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:33.237 [2024-11-08 17:07:09.878111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.237 [2024-11-08 17:07:09.880521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.237 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.237 "name": "Existed_Raid", 00:19:33.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.238 "strip_size_kb": 64, 00:19:33.238 "state": "configuring", 00:19:33.238 "raid_level": "raid0", 00:19:33.238 "superblock": false, 00:19:33.238 "num_base_bdevs": 3, 00:19:33.238 "num_base_bdevs_discovered": 2, 00:19:33.238 "num_base_bdevs_operational": 3, 00:19:33.238 "base_bdevs_list": [ 00:19:33.238 { 00:19:33.238 "name": "BaseBdev1", 00:19:33.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.238 
"is_configured": false, 00:19:33.238 "data_offset": 0, 00:19:33.238 "data_size": 0 00:19:33.238 }, 00:19:33.238 { 00:19:33.238 "name": "BaseBdev2", 00:19:33.238 "uuid": "1af9aefe-6b96-4fd4-9884-e569dd7a3c26", 00:19:33.238 "is_configured": true, 00:19:33.238 "data_offset": 0, 00:19:33.238 "data_size": 65536 00:19:33.238 }, 00:19:33.238 { 00:19:33.238 "name": "BaseBdev3", 00:19:33.238 "uuid": "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7", 00:19:33.238 "is_configured": true, 00:19:33.238 "data_offset": 0, 00:19:33.238 "data_size": 65536 00:19:33.238 } 00:19:33.238 ] 00:19:33.238 }' 00:19:33.238 17:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.238 17:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.499 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:33.499 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.499 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.760 [2024-11-08 17:07:10.213916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.760 17:07:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.760 "name": "Existed_Raid", 00:19:33.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.760 "strip_size_kb": 64, 00:19:33.760 "state": "configuring", 00:19:33.760 "raid_level": "raid0", 00:19:33.760 "superblock": false, 00:19:33.760 "num_base_bdevs": 3, 00:19:33.760 "num_base_bdevs_discovered": 1, 00:19:33.760 "num_base_bdevs_operational": 3, 00:19:33.760 "base_bdevs_list": [ 00:19:33.760 { 00:19:33.760 "name": "BaseBdev1", 00:19:33.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.760 "is_configured": false, 00:19:33.760 "data_offset": 0, 00:19:33.760 "data_size": 0 00:19:33.760 }, 00:19:33.760 { 00:19:33.760 "name": null, 00:19:33.760 "uuid": "1af9aefe-6b96-4fd4-9884-e569dd7a3c26", 00:19:33.760 "is_configured": false, 00:19:33.760 "data_offset": 0, 
00:19:33.760 "data_size": 65536 00:19:33.760 }, 00:19:33.760 { 00:19:33.760 "name": "BaseBdev3", 00:19:33.760 "uuid": "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7", 00:19:33.760 "is_configured": true, 00:19:33.760 "data_offset": 0, 00:19:33.760 "data_size": 65536 00:19:33.760 } 00:19:33.760 ] 00:19:33.760 }' 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.760 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.020 [2024-11-08 17:07:10.584921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:34.020 BaseBdev1 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.020 [ 00:19:34.020 { 00:19:34.020 "name": "BaseBdev1", 00:19:34.020 "aliases": [ 00:19:34.020 "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0" 00:19:34.020 ], 00:19:34.020 "product_name": "Malloc disk", 00:19:34.020 "block_size": 512, 00:19:34.020 "num_blocks": 65536, 00:19:34.020 "uuid": "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0", 00:19:34.020 "assigned_rate_limits": { 00:19:34.020 "rw_ios_per_sec": 0, 00:19:34.020 "rw_mbytes_per_sec": 0, 00:19:34.020 "r_mbytes_per_sec": 0, 00:19:34.020 "w_mbytes_per_sec": 0 00:19:34.020 }, 00:19:34.020 "claimed": true, 00:19:34.020 "claim_type": "exclusive_write", 00:19:34.020 "zoned": false, 00:19:34.020 "supported_io_types": { 00:19:34.020 "read": true, 00:19:34.020 "write": true, 00:19:34.020 "unmap": 
true, 00:19:34.020 "flush": true, 00:19:34.020 "reset": true, 00:19:34.020 "nvme_admin": false, 00:19:34.020 "nvme_io": false, 00:19:34.020 "nvme_io_md": false, 00:19:34.020 "write_zeroes": true, 00:19:34.020 "zcopy": true, 00:19:34.020 "get_zone_info": false, 00:19:34.020 "zone_management": false, 00:19:34.020 "zone_append": false, 00:19:34.020 "compare": false, 00:19:34.020 "compare_and_write": false, 00:19:34.020 "abort": true, 00:19:34.020 "seek_hole": false, 00:19:34.020 "seek_data": false, 00:19:34.020 "copy": true, 00:19:34.020 "nvme_iov_md": false 00:19:34.020 }, 00:19:34.020 "memory_domains": [ 00:19:34.020 { 00:19:34.020 "dma_device_id": "system", 00:19:34.020 "dma_device_type": 1 00:19:34.020 }, 00:19:34.020 { 00:19:34.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.020 "dma_device_type": 2 00:19:34.020 } 00:19:34.020 ], 00:19:34.020 "driver_specific": {} 00:19:34.020 } 00:19:34.020 ] 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.020 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.021 17:07:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.021 "name": "Existed_Raid", 00:19:34.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.021 "strip_size_kb": 64, 00:19:34.021 "state": "configuring", 00:19:34.021 "raid_level": "raid0", 00:19:34.021 "superblock": false, 00:19:34.021 "num_base_bdevs": 3, 00:19:34.021 "num_base_bdevs_discovered": 2, 00:19:34.021 "num_base_bdevs_operational": 3, 00:19:34.021 "base_bdevs_list": [ 00:19:34.021 { 00:19:34.021 "name": "BaseBdev1", 00:19:34.021 "uuid": "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0", 00:19:34.021 "is_configured": true, 00:19:34.021 "data_offset": 0, 00:19:34.021 "data_size": 65536 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "name": null, 00:19:34.021 "uuid": "1af9aefe-6b96-4fd4-9884-e569dd7a3c26", 00:19:34.021 "is_configured": false, 00:19:34.021 "data_offset": 0, 00:19:34.021 "data_size": 65536 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "name": "BaseBdev3", 00:19:34.021 "uuid": "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7", 00:19:34.021 "is_configured": true, 00:19:34.021 "data_offset": 0, 
00:19:34.021 "data_size": 65536 00:19:34.021 } 00:19:34.021 ] 00:19:34.021 }' 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.021 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.282 [2024-11-08 17:07:10.985089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.282 17:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.542 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.542 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.542 "name": "Existed_Raid", 00:19:34.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.542 "strip_size_kb": 64, 00:19:34.542 "state": "configuring", 00:19:34.542 "raid_level": "raid0", 00:19:34.542 "superblock": false, 00:19:34.542 "num_base_bdevs": 3, 00:19:34.542 "num_base_bdevs_discovered": 1, 00:19:34.542 "num_base_bdevs_operational": 3, 00:19:34.542 "base_bdevs_list": [ 00:19:34.542 { 00:19:34.542 "name": "BaseBdev1", 00:19:34.542 "uuid": "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0", 00:19:34.542 "is_configured": true, 00:19:34.542 "data_offset": 0, 00:19:34.542 "data_size": 65536 00:19:34.542 }, 00:19:34.542 { 
00:19:34.542 "name": null, 00:19:34.542 "uuid": "1af9aefe-6b96-4fd4-9884-e569dd7a3c26", 00:19:34.542 "is_configured": false, 00:19:34.542 "data_offset": 0, 00:19:34.542 "data_size": 65536 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "name": null, 00:19:34.542 "uuid": "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7", 00:19:34.542 "is_configured": false, 00:19:34.542 "data_offset": 0, 00:19:34.542 "data_size": 65536 00:19:34.542 } 00:19:34.542 ] 00:19:34.542 }' 00:19:34.542 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.542 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.804 [2024-11-08 17:07:11.357233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.804 "name": "Existed_Raid", 00:19:34.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.804 "strip_size_kb": 64, 00:19:34.804 "state": "configuring", 00:19:34.804 "raid_level": "raid0", 00:19:34.804 
"superblock": false, 00:19:34.804 "num_base_bdevs": 3, 00:19:34.804 "num_base_bdevs_discovered": 2, 00:19:34.804 "num_base_bdevs_operational": 3, 00:19:34.804 "base_bdevs_list": [ 00:19:34.804 { 00:19:34.804 "name": "BaseBdev1", 00:19:34.804 "uuid": "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0", 00:19:34.804 "is_configured": true, 00:19:34.804 "data_offset": 0, 00:19:34.804 "data_size": 65536 00:19:34.804 }, 00:19:34.804 { 00:19:34.804 "name": null, 00:19:34.804 "uuid": "1af9aefe-6b96-4fd4-9884-e569dd7a3c26", 00:19:34.804 "is_configured": false, 00:19:34.804 "data_offset": 0, 00:19:34.804 "data_size": 65536 00:19:34.804 }, 00:19:34.804 { 00:19:34.804 "name": "BaseBdev3", 00:19:34.804 "uuid": "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7", 00:19:34.804 "is_configured": true, 00:19:34.804 "data_offset": 0, 00:19:34.804 "data_size": 65536 00:19:34.804 } 00:19:34.804 ] 00:19:34.804 }' 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.804 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.064 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.064 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.064 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.064 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:35.064 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.064 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:35.064 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:35.064 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:35.064 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.064 [2024-11-08 17:07:11.713349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.325 "name": "Existed_Raid", 00:19:35.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.325 "strip_size_kb": 64, 00:19:35.325 "state": "configuring", 00:19:35.325 "raid_level": "raid0", 00:19:35.325 "superblock": false, 00:19:35.325 "num_base_bdevs": 3, 00:19:35.325 "num_base_bdevs_discovered": 1, 00:19:35.325 "num_base_bdevs_operational": 3, 00:19:35.325 "base_bdevs_list": [ 00:19:35.325 { 00:19:35.325 "name": null, 00:19:35.325 "uuid": "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0", 00:19:35.325 "is_configured": false, 00:19:35.325 "data_offset": 0, 00:19:35.325 "data_size": 65536 00:19:35.325 }, 00:19:35.325 { 00:19:35.325 "name": null, 00:19:35.325 "uuid": "1af9aefe-6b96-4fd4-9884-e569dd7a3c26", 00:19:35.325 "is_configured": false, 00:19:35.325 "data_offset": 0, 00:19:35.325 "data_size": 65536 00:19:35.325 }, 00:19:35.325 { 00:19:35.325 "name": "BaseBdev3", 00:19:35.325 "uuid": "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7", 00:19:35.325 "is_configured": true, 00:19:35.325 "data_offset": 0, 00:19:35.325 "data_size": 65536 00:19:35.325 } 00:19:35.325 ] 00:19:35.325 }' 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.325 17:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.586 [2024-11-08 17:07:12.141347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.586 "name": "Existed_Raid", 00:19:35.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.586 "strip_size_kb": 64, 00:19:35.586 "state": "configuring", 00:19:35.586 "raid_level": "raid0", 00:19:35.586 "superblock": false, 00:19:35.586 "num_base_bdevs": 3, 00:19:35.586 "num_base_bdevs_discovered": 2, 00:19:35.586 "num_base_bdevs_operational": 3, 00:19:35.586 "base_bdevs_list": [ 00:19:35.586 { 00:19:35.586 "name": null, 00:19:35.586 "uuid": "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0", 00:19:35.586 "is_configured": false, 00:19:35.586 "data_offset": 0, 00:19:35.586 "data_size": 65536 00:19:35.586 }, 00:19:35.586 { 00:19:35.586 "name": "BaseBdev2", 00:19:35.586 "uuid": "1af9aefe-6b96-4fd4-9884-e569dd7a3c26", 00:19:35.586 "is_configured": true, 00:19:35.586 "data_offset": 0, 00:19:35.586 "data_size": 65536 00:19:35.586 }, 00:19:35.586 { 00:19:35.586 "name": "BaseBdev3", 00:19:35.586 "uuid": "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7", 00:19:35.586 "is_configured": true, 00:19:35.586 "data_offset": 0, 00:19:35.586 "data_size": 65536 00:19:35.586 } 00:19:35.586 ] 00:19:35.586 }' 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.586 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.847 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:35.847 
17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.847 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.847 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.847 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.847 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6aaca1f6-396d-4a38-a8d8-c21dd8443ee0 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.107 [2024-11-08 17:07:12.628569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:36.107 [2024-11-08 17:07:12.628638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:36.107 [2024-11-08 17:07:12.628651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:36.107 [2024-11-08 17:07:12.629032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:19:36.107 [2024-11-08 17:07:12.629216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:36.107 [2024-11-08 17:07:12.629226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:36.107 [2024-11-08 17:07:12.629553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.107 NewBaseBdev 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:19:36.107 [ 00:19:36.107 { 00:19:36.107 "name": "NewBaseBdev", 00:19:36.107 "aliases": [ 00:19:36.107 "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0" 00:19:36.107 ], 00:19:36.107 "product_name": "Malloc disk", 00:19:36.107 "block_size": 512, 00:19:36.107 "num_blocks": 65536, 00:19:36.107 "uuid": "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0", 00:19:36.107 "assigned_rate_limits": { 00:19:36.107 "rw_ios_per_sec": 0, 00:19:36.107 "rw_mbytes_per_sec": 0, 00:19:36.107 "r_mbytes_per_sec": 0, 00:19:36.107 "w_mbytes_per_sec": 0 00:19:36.107 }, 00:19:36.107 "claimed": true, 00:19:36.107 "claim_type": "exclusive_write", 00:19:36.107 "zoned": false, 00:19:36.107 "supported_io_types": { 00:19:36.107 "read": true, 00:19:36.107 "write": true, 00:19:36.107 "unmap": true, 00:19:36.107 "flush": true, 00:19:36.107 "reset": true, 00:19:36.107 "nvme_admin": false, 00:19:36.107 "nvme_io": false, 00:19:36.107 "nvme_io_md": false, 00:19:36.107 "write_zeroes": true, 00:19:36.107 "zcopy": true, 00:19:36.107 "get_zone_info": false, 00:19:36.107 "zone_management": false, 00:19:36.107 "zone_append": false, 00:19:36.107 "compare": false, 00:19:36.107 "compare_and_write": false, 00:19:36.107 "abort": true, 00:19:36.107 "seek_hole": false, 00:19:36.107 "seek_data": false, 00:19:36.107 "copy": true, 00:19:36.107 "nvme_iov_md": false 00:19:36.107 }, 00:19:36.107 "memory_domains": [ 00:19:36.107 { 00:19:36.107 "dma_device_id": "system", 00:19:36.107 "dma_device_type": 1 00:19:36.107 }, 00:19:36.107 { 00:19:36.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.107 "dma_device_type": 2 00:19:36.107 } 00:19:36.107 ], 00:19:36.107 "driver_specific": {} 00:19:36.107 } 00:19:36.107 ] 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.107 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.108 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.108 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.108 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.108 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.108 "name": "Existed_Raid", 00:19:36.108 "uuid": "8cfbb104-a66b-4ca7-a8c5-ebe099976843", 00:19:36.108 "strip_size_kb": 64, 00:19:36.108 "state": "online", 00:19:36.108 "raid_level": "raid0", 00:19:36.108 "superblock": false, 00:19:36.108 "num_base_bdevs": 3, 00:19:36.108 
"num_base_bdevs_discovered": 3, 00:19:36.108 "num_base_bdevs_operational": 3, 00:19:36.108 "base_bdevs_list": [ 00:19:36.108 { 00:19:36.108 "name": "NewBaseBdev", 00:19:36.108 "uuid": "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0", 00:19:36.108 "is_configured": true, 00:19:36.108 "data_offset": 0, 00:19:36.108 "data_size": 65536 00:19:36.108 }, 00:19:36.108 { 00:19:36.108 "name": "BaseBdev2", 00:19:36.108 "uuid": "1af9aefe-6b96-4fd4-9884-e569dd7a3c26", 00:19:36.108 "is_configured": true, 00:19:36.108 "data_offset": 0, 00:19:36.108 "data_size": 65536 00:19:36.108 }, 00:19:36.108 { 00:19:36.108 "name": "BaseBdev3", 00:19:36.108 "uuid": "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7", 00:19:36.108 "is_configured": true, 00:19:36.108 "data_offset": 0, 00:19:36.108 "data_size": 65536 00:19:36.108 } 00:19:36.108 ] 00:19:36.108 }' 00:19:36.108 17:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.108 17:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.366 [2024-11-08 17:07:13.029163] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.366 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:36.366 "name": "Existed_Raid", 00:19:36.366 "aliases": [ 00:19:36.366 "8cfbb104-a66b-4ca7-a8c5-ebe099976843" 00:19:36.366 ], 00:19:36.366 "product_name": "Raid Volume", 00:19:36.366 "block_size": 512, 00:19:36.366 "num_blocks": 196608, 00:19:36.366 "uuid": "8cfbb104-a66b-4ca7-a8c5-ebe099976843", 00:19:36.366 "assigned_rate_limits": { 00:19:36.366 "rw_ios_per_sec": 0, 00:19:36.366 "rw_mbytes_per_sec": 0, 00:19:36.366 "r_mbytes_per_sec": 0, 00:19:36.366 "w_mbytes_per_sec": 0 00:19:36.366 }, 00:19:36.366 "claimed": false, 00:19:36.366 "zoned": false, 00:19:36.366 "supported_io_types": { 00:19:36.366 "read": true, 00:19:36.366 "write": true, 00:19:36.366 "unmap": true, 00:19:36.366 "flush": true, 00:19:36.366 "reset": true, 00:19:36.366 "nvme_admin": false, 00:19:36.366 "nvme_io": false, 00:19:36.366 "nvme_io_md": false, 00:19:36.366 "write_zeroes": true, 00:19:36.366 "zcopy": false, 00:19:36.366 "get_zone_info": false, 00:19:36.366 "zone_management": false, 00:19:36.366 "zone_append": false, 00:19:36.366 "compare": false, 00:19:36.366 "compare_and_write": false, 00:19:36.366 "abort": false, 00:19:36.366 "seek_hole": false, 00:19:36.366 "seek_data": false, 00:19:36.366 "copy": false, 00:19:36.366 "nvme_iov_md": false 00:19:36.366 }, 00:19:36.366 "memory_domains": [ 00:19:36.366 { 00:19:36.366 "dma_device_id": "system", 00:19:36.367 "dma_device_type": 1 00:19:36.367 }, 00:19:36.367 { 00:19:36.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.367 "dma_device_type": 2 00:19:36.367 }, 00:19:36.367 
{ 00:19:36.367 "dma_device_id": "system", 00:19:36.367 "dma_device_type": 1 00:19:36.367 }, 00:19:36.367 { 00:19:36.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.367 "dma_device_type": 2 00:19:36.367 }, 00:19:36.367 { 00:19:36.367 "dma_device_id": "system", 00:19:36.367 "dma_device_type": 1 00:19:36.367 }, 00:19:36.367 { 00:19:36.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.367 "dma_device_type": 2 00:19:36.367 } 00:19:36.367 ], 00:19:36.367 "driver_specific": { 00:19:36.367 "raid": { 00:19:36.367 "uuid": "8cfbb104-a66b-4ca7-a8c5-ebe099976843", 00:19:36.367 "strip_size_kb": 64, 00:19:36.367 "state": "online", 00:19:36.367 "raid_level": "raid0", 00:19:36.367 "superblock": false, 00:19:36.367 "num_base_bdevs": 3, 00:19:36.367 "num_base_bdevs_discovered": 3, 00:19:36.367 "num_base_bdevs_operational": 3, 00:19:36.367 "base_bdevs_list": [ 00:19:36.367 { 00:19:36.367 "name": "NewBaseBdev", 00:19:36.367 "uuid": "6aaca1f6-396d-4a38-a8d8-c21dd8443ee0", 00:19:36.367 "is_configured": true, 00:19:36.367 "data_offset": 0, 00:19:36.367 "data_size": 65536 00:19:36.367 }, 00:19:36.367 { 00:19:36.367 "name": "BaseBdev2", 00:19:36.367 "uuid": "1af9aefe-6b96-4fd4-9884-e569dd7a3c26", 00:19:36.367 "is_configured": true, 00:19:36.367 "data_offset": 0, 00:19:36.367 "data_size": 65536 00:19:36.367 }, 00:19:36.367 { 00:19:36.367 "name": "BaseBdev3", 00:19:36.367 "uuid": "e23e6b37-ff85-44eb-a6ce-cf44296d7aa7", 00:19:36.367 "is_configured": true, 00:19:36.367 "data_offset": 0, 00:19:36.367 "data_size": 65536 00:19:36.367 } 00:19:36.367 ] 00:19:36.367 } 00:19:36.367 } 00:19:36.367 }' 00:19:36.367 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:36.627 BaseBdev2 00:19:36.627 BaseBdev3' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:36.627 
17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.627 [2024-11-08 17:07:13.232790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:36.627 [2024-11-08 17:07:13.232962] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.627 [2024-11-08 17:07:13.233101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.627 [2024-11-08 17:07:13.233184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.627 [2024-11-08 17:07:13.233200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62710 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62710 ']' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62710 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62710 00:19:36.627 killing process with pid 62710 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62710' 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62710 00:19:36.627 17:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62710 00:19:36.627 [2024-11-08 17:07:13.267585] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:36.913 [2024-11-08 17:07:13.492119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:37.860 00:19:37.860 real 0m8.313s 00:19:37.860 user 0m12.793s 00:19:37.860 sys 0m1.590s 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:37.860 
************************************ 00:19:37.860 END TEST raid_state_function_test 00:19:37.860 ************************************ 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.860 17:07:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:19:37.860 17:07:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:37.860 17:07:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:37.860 17:07:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.860 ************************************ 00:19:37.860 START TEST raid_state_function_test_sb 00:19:37.860 ************************************ 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:37.860 Process raid pid: 63309 00:19:37.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63309 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63309' 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63309 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 63309 ']' 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.860 17:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.860 [2024-11-08 17:07:14.556266] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:19:37.860 [2024-11-08 17:07:14.557946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.124 [2024-11-08 17:07:14.745871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.385 [2024-11-08 17:07:14.909355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.385 [2024-11-08 17:07:15.092079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.385 [2024-11-08 17:07:15.092426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.959 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.959 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:38.959 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:38.959 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.959 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.959 [2024-11-08 17:07:15.416955] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:38.959 [2024-11-08 17:07:15.417043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:38.960 [2024-11-08 
17:07:15.417057] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:38.960 [2024-11-08 17:07:15.417070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:38.960 [2024-11-08 17:07:15.417077] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:38.960 [2024-11-08 17:07:15.417088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.960 "name": "Existed_Raid", 00:19:38.960 "uuid": "727396cd-1a76-4345-b9eb-a8ec83f46023", 00:19:38.960 "strip_size_kb": 64, 00:19:38.960 "state": "configuring", 00:19:38.960 "raid_level": "raid0", 00:19:38.960 "superblock": true, 00:19:38.960 "num_base_bdevs": 3, 00:19:38.960 "num_base_bdevs_discovered": 0, 00:19:38.960 "num_base_bdevs_operational": 3, 00:19:38.960 "base_bdevs_list": [ 00:19:38.960 { 00:19:38.960 "name": "BaseBdev1", 00:19:38.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.960 "is_configured": false, 00:19:38.960 "data_offset": 0, 00:19:38.960 "data_size": 0 00:19:38.960 }, 00:19:38.960 { 00:19:38.960 "name": "BaseBdev2", 00:19:38.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.960 "is_configured": false, 00:19:38.960 "data_offset": 0, 00:19:38.960 "data_size": 0 00:19:38.960 }, 00:19:38.960 { 00:19:38.960 "name": "BaseBdev3", 00:19:38.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.960 "is_configured": false, 00:19:38.960 "data_offset": 0, 00:19:38.960 "data_size": 0 00:19:38.960 } 00:19:38.960 ] 00:19:38.960 }' 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.960 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.222 [2024-11-08 17:07:15.732965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:39.222 [2024-11-08 17:07:15.733023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.222 [2024-11-08 17:07:15.740965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:39.222 [2024-11-08 17:07:15.741161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:39.222 [2024-11-08 17:07:15.741239] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:39.222 [2024-11-08 17:07:15.741270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:39.222 [2024-11-08 17:07:15.741289] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:39.222 [2024-11-08 17:07:15.741311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:39.222 
17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.222 BaseBdev1 00:19:39.222 [2024-11-08 17:07:15.782228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.222 [ 00:19:39.222 { 
00:19:39.222 "name": "BaseBdev1", 00:19:39.222 "aliases": [ 00:19:39.222 "1f9910e7-284e-4f0c-b9ae-963d6b1e814d" 00:19:39.222 ], 00:19:39.222 "product_name": "Malloc disk", 00:19:39.222 "block_size": 512, 00:19:39.222 "num_blocks": 65536, 00:19:39.222 "uuid": "1f9910e7-284e-4f0c-b9ae-963d6b1e814d", 00:19:39.222 "assigned_rate_limits": { 00:19:39.222 "rw_ios_per_sec": 0, 00:19:39.222 "rw_mbytes_per_sec": 0, 00:19:39.222 "r_mbytes_per_sec": 0, 00:19:39.222 "w_mbytes_per_sec": 0 00:19:39.222 }, 00:19:39.222 "claimed": true, 00:19:39.222 "claim_type": "exclusive_write", 00:19:39.222 "zoned": false, 00:19:39.222 "supported_io_types": { 00:19:39.222 "read": true, 00:19:39.222 "write": true, 00:19:39.222 "unmap": true, 00:19:39.222 "flush": true, 00:19:39.222 "reset": true, 00:19:39.222 "nvme_admin": false, 00:19:39.222 "nvme_io": false, 00:19:39.222 "nvme_io_md": false, 00:19:39.222 "write_zeroes": true, 00:19:39.222 "zcopy": true, 00:19:39.222 "get_zone_info": false, 00:19:39.222 "zone_management": false, 00:19:39.222 "zone_append": false, 00:19:39.222 "compare": false, 00:19:39.222 "compare_and_write": false, 00:19:39.222 "abort": true, 00:19:39.222 "seek_hole": false, 00:19:39.222 "seek_data": false, 00:19:39.222 "copy": true, 00:19:39.222 "nvme_iov_md": false 00:19:39.222 }, 00:19:39.222 "memory_domains": [ 00:19:39.222 { 00:19:39.222 "dma_device_id": "system", 00:19:39.222 "dma_device_type": 1 00:19:39.222 }, 00:19:39.222 { 00:19:39.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.222 "dma_device_type": 2 00:19:39.222 } 00:19:39.222 ], 00:19:39.222 "driver_specific": {} 00:19:39.222 } 00:19:39.222 ] 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.222 "name": "Existed_Raid", 00:19:39.222 "uuid": "c93a03fa-6ee2-481a-91e7-09185e204e1f", 00:19:39.222 "strip_size_kb": 64, 00:19:39.222 "state": "configuring", 00:19:39.222 "raid_level": "raid0", 00:19:39.222 "superblock": true, 00:19:39.222 
"num_base_bdevs": 3, 00:19:39.222 "num_base_bdevs_discovered": 1, 00:19:39.222 "num_base_bdevs_operational": 3, 00:19:39.222 "base_bdevs_list": [ 00:19:39.222 { 00:19:39.222 "name": "BaseBdev1", 00:19:39.222 "uuid": "1f9910e7-284e-4f0c-b9ae-963d6b1e814d", 00:19:39.222 "is_configured": true, 00:19:39.222 "data_offset": 2048, 00:19:39.222 "data_size": 63488 00:19:39.222 }, 00:19:39.222 { 00:19:39.222 "name": "BaseBdev2", 00:19:39.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.222 "is_configured": false, 00:19:39.222 "data_offset": 0, 00:19:39.222 "data_size": 0 00:19:39.222 }, 00:19:39.222 { 00:19:39.222 "name": "BaseBdev3", 00:19:39.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.222 "is_configured": false, 00:19:39.222 "data_offset": 0, 00:19:39.222 "data_size": 0 00:19:39.222 } 00:19:39.222 ] 00:19:39.222 }' 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.222 17:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.484 [2024-11-08 17:07:16.126388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:39.484 [2024-11-08 17:07:16.126468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:39.484 
17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.484 [2024-11-08 17:07:16.134457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.484 [2024-11-08 17:07:16.137096] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:39.484 [2024-11-08 17:07:16.137160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:39.484 [2024-11-08 17:07:16.137173] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:39.484 [2024-11-08 17:07:16.137184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.484 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.485 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.485 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.485 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.485 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.485 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.485 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.485 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.485 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.485 "name": "Existed_Raid", 00:19:39.485 "uuid": "9e17f4a5-6230-4eb3-8ad2-59e484880b73", 00:19:39.485 "strip_size_kb": 64, 00:19:39.485 "state": "configuring", 00:19:39.485 "raid_level": "raid0", 00:19:39.485 "superblock": true, 00:19:39.485 "num_base_bdevs": 3, 00:19:39.485 "num_base_bdevs_discovered": 1, 00:19:39.485 "num_base_bdevs_operational": 3, 00:19:39.485 "base_bdevs_list": [ 00:19:39.485 { 00:19:39.485 "name": "BaseBdev1", 00:19:39.485 "uuid": "1f9910e7-284e-4f0c-b9ae-963d6b1e814d", 00:19:39.485 "is_configured": true, 00:19:39.485 "data_offset": 2048, 00:19:39.485 "data_size": 63488 00:19:39.485 }, 00:19:39.485 { 00:19:39.485 "name": "BaseBdev2", 00:19:39.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.485 "is_configured": false, 00:19:39.485 "data_offset": 0, 00:19:39.485 "data_size": 0 00:19:39.485 }, 00:19:39.485 { 00:19:39.485 "name": "BaseBdev3", 00:19:39.485 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:39.485 "is_configured": false, 00:19:39.485 "data_offset": 0, 00:19:39.485 "data_size": 0 00:19:39.485 } 00:19:39.485 ] 00:19:39.485 }' 00:19:39.485 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.485 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.057 BaseBdev2 00:19:40.057 [2024-11-08 17:07:16.497587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.057 [ 00:19:40.057 { 00:19:40.057 "name": "BaseBdev2", 00:19:40.057 "aliases": [ 00:19:40.057 "9503a5e1-cd0d-4485-a562-20f0e1719dc0" 00:19:40.057 ], 00:19:40.057 "product_name": "Malloc disk", 00:19:40.057 "block_size": 512, 00:19:40.057 "num_blocks": 65536, 00:19:40.057 "uuid": "9503a5e1-cd0d-4485-a562-20f0e1719dc0", 00:19:40.057 "assigned_rate_limits": { 00:19:40.057 "rw_ios_per_sec": 0, 00:19:40.057 "rw_mbytes_per_sec": 0, 00:19:40.057 "r_mbytes_per_sec": 0, 00:19:40.057 "w_mbytes_per_sec": 0 00:19:40.057 }, 00:19:40.057 "claimed": true, 00:19:40.057 "claim_type": "exclusive_write", 00:19:40.057 "zoned": false, 00:19:40.057 "supported_io_types": { 00:19:40.057 "read": true, 00:19:40.057 "write": true, 00:19:40.057 "unmap": true, 00:19:40.057 "flush": true, 00:19:40.057 "reset": true, 00:19:40.057 "nvme_admin": false, 00:19:40.057 "nvme_io": false, 00:19:40.057 "nvme_io_md": false, 00:19:40.057 "write_zeroes": true, 00:19:40.057 "zcopy": true, 00:19:40.057 "get_zone_info": false, 00:19:40.057 "zone_management": false, 00:19:40.057 "zone_append": false, 00:19:40.057 "compare": false, 00:19:40.057 "compare_and_write": false, 00:19:40.057 "abort": true, 00:19:40.057 "seek_hole": false, 00:19:40.057 "seek_data": false, 00:19:40.057 "copy": true, 00:19:40.057 "nvme_iov_md": false 00:19:40.057 }, 00:19:40.057 "memory_domains": [ 00:19:40.057 { 00:19:40.057 "dma_device_id": "system", 00:19:40.057 "dma_device_type": 1 00:19:40.057 }, 00:19:40.057 { 00:19:40.057 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.057 "dma_device_type": 2 00:19:40.057 } 00:19:40.057 ], 00:19:40.057 "driver_specific": {} 00:19:40.057 } 00:19:40.057 ] 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.057 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.057 "name": "Existed_Raid", 00:19:40.057 "uuid": "9e17f4a5-6230-4eb3-8ad2-59e484880b73", 00:19:40.057 "strip_size_kb": 64, 00:19:40.057 "state": "configuring", 00:19:40.057 "raid_level": "raid0", 00:19:40.057 "superblock": true, 00:19:40.057 "num_base_bdevs": 3, 00:19:40.057 "num_base_bdevs_discovered": 2, 00:19:40.057 "num_base_bdevs_operational": 3, 00:19:40.057 "base_bdevs_list": [ 00:19:40.057 { 00:19:40.057 "name": "BaseBdev1", 00:19:40.057 "uuid": "1f9910e7-284e-4f0c-b9ae-963d6b1e814d", 00:19:40.058 "is_configured": true, 00:19:40.058 "data_offset": 2048, 00:19:40.058 "data_size": 63488 00:19:40.058 }, 00:19:40.058 { 00:19:40.058 "name": "BaseBdev2", 00:19:40.058 "uuid": "9503a5e1-cd0d-4485-a562-20f0e1719dc0", 00:19:40.058 "is_configured": true, 00:19:40.058 "data_offset": 2048, 00:19:40.058 "data_size": 63488 00:19:40.058 }, 00:19:40.058 { 00:19:40.058 "name": "BaseBdev3", 00:19:40.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.058 "is_configured": false, 00:19:40.058 "data_offset": 0, 00:19:40.058 "data_size": 0 00:19:40.058 } 00:19:40.058 ] 00:19:40.058 }' 00:19:40.058 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.058 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:40.319 17:07:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.319 [2024-11-08 17:07:16.892391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:40.319 [2024-11-08 17:07:16.893028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:40.319 [2024-11-08 17:07:16.893181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:40.319 BaseBdev3 00:19:40.319 [2024-11-08 17:07:16.893561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:40.319 [2024-11-08 17:07:16.893864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:40.319 [2024-11-08 17:07:16.893879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:40.319 [2024-11-08 17:07:16.894055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.319 [ 00:19:40.319 { 00:19:40.319 "name": "BaseBdev3", 00:19:40.319 "aliases": [ 00:19:40.319 "0ef65e65-f18a-4637-84e0-879311ee75a3" 00:19:40.319 ], 00:19:40.319 "product_name": "Malloc disk", 00:19:40.319 "block_size": 512, 00:19:40.319 "num_blocks": 65536, 00:19:40.319 "uuid": "0ef65e65-f18a-4637-84e0-879311ee75a3", 00:19:40.319 "assigned_rate_limits": { 00:19:40.319 "rw_ios_per_sec": 0, 00:19:40.319 "rw_mbytes_per_sec": 0, 00:19:40.319 "r_mbytes_per_sec": 0, 00:19:40.319 "w_mbytes_per_sec": 0 00:19:40.319 }, 00:19:40.319 "claimed": true, 00:19:40.319 "claim_type": "exclusive_write", 00:19:40.319 "zoned": false, 00:19:40.319 "supported_io_types": { 00:19:40.319 "read": true, 00:19:40.319 "write": true, 00:19:40.319 "unmap": true, 00:19:40.319 "flush": true, 00:19:40.319 "reset": true, 00:19:40.319 "nvme_admin": false, 00:19:40.319 "nvme_io": false, 00:19:40.319 "nvme_io_md": false, 00:19:40.319 "write_zeroes": true, 00:19:40.319 "zcopy": true, 00:19:40.319 "get_zone_info": false, 00:19:40.319 "zone_management": false, 00:19:40.319 "zone_append": false, 00:19:40.319 "compare": false, 00:19:40.319 "compare_and_write": false, 00:19:40.319 "abort": true, 00:19:40.319 "seek_hole": false, 00:19:40.319 "seek_data": false, 
00:19:40.319 "copy": true, 00:19:40.319 "nvme_iov_md": false 00:19:40.319 }, 00:19:40.319 "memory_domains": [ 00:19:40.319 { 00:19:40.319 "dma_device_id": "system", 00:19:40.319 "dma_device_type": 1 00:19:40.319 }, 00:19:40.319 { 00:19:40.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.319 "dma_device_type": 2 00:19:40.319 } 00:19:40.319 ], 00:19:40.319 "driver_specific": {} 00:19:40.319 } 00:19:40.319 ] 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.319 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.319 "name": "Existed_Raid", 00:19:40.319 "uuid": "9e17f4a5-6230-4eb3-8ad2-59e484880b73", 00:19:40.319 "strip_size_kb": 64, 00:19:40.319 "state": "online", 00:19:40.319 "raid_level": "raid0", 00:19:40.319 "superblock": true, 00:19:40.319 "num_base_bdevs": 3, 00:19:40.319 "num_base_bdevs_discovered": 3, 00:19:40.319 "num_base_bdevs_operational": 3, 00:19:40.319 "base_bdevs_list": [ 00:19:40.319 { 00:19:40.319 "name": "BaseBdev1", 00:19:40.319 "uuid": "1f9910e7-284e-4f0c-b9ae-963d6b1e814d", 00:19:40.319 "is_configured": true, 00:19:40.319 "data_offset": 2048, 00:19:40.319 "data_size": 63488 00:19:40.319 }, 00:19:40.319 { 00:19:40.319 "name": "BaseBdev2", 00:19:40.319 "uuid": "9503a5e1-cd0d-4485-a562-20f0e1719dc0", 00:19:40.319 "is_configured": true, 00:19:40.319 "data_offset": 2048, 00:19:40.319 "data_size": 63488 00:19:40.319 }, 00:19:40.319 { 00:19:40.319 "name": "BaseBdev3", 00:19:40.319 "uuid": "0ef65e65-f18a-4637-84e0-879311ee75a3", 00:19:40.319 "is_configured": true, 00:19:40.320 "data_offset": 2048, 00:19:40.320 "data_size": 63488 00:19:40.320 } 00:19:40.320 ] 00:19:40.320 }' 00:19:40.320 17:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.320 17:07:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.591 [2024-11-08 17:07:17.273027] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:40.591 "name": "Existed_Raid", 00:19:40.591 "aliases": [ 00:19:40.591 "9e17f4a5-6230-4eb3-8ad2-59e484880b73" 00:19:40.591 ], 00:19:40.591 "product_name": "Raid Volume", 00:19:40.591 "block_size": 512, 00:19:40.591 "num_blocks": 190464, 00:19:40.591 "uuid": "9e17f4a5-6230-4eb3-8ad2-59e484880b73", 00:19:40.591 "assigned_rate_limits": { 00:19:40.591 "rw_ios_per_sec": 0, 00:19:40.591 "rw_mbytes_per_sec": 0, 00:19:40.591 
"r_mbytes_per_sec": 0, 00:19:40.591 "w_mbytes_per_sec": 0 00:19:40.591 }, 00:19:40.591 "claimed": false, 00:19:40.591 "zoned": false, 00:19:40.591 "supported_io_types": { 00:19:40.591 "read": true, 00:19:40.591 "write": true, 00:19:40.591 "unmap": true, 00:19:40.591 "flush": true, 00:19:40.591 "reset": true, 00:19:40.591 "nvme_admin": false, 00:19:40.591 "nvme_io": false, 00:19:40.591 "nvme_io_md": false, 00:19:40.591 "write_zeroes": true, 00:19:40.591 "zcopy": false, 00:19:40.591 "get_zone_info": false, 00:19:40.591 "zone_management": false, 00:19:40.591 "zone_append": false, 00:19:40.591 "compare": false, 00:19:40.591 "compare_and_write": false, 00:19:40.591 "abort": false, 00:19:40.591 "seek_hole": false, 00:19:40.591 "seek_data": false, 00:19:40.591 "copy": false, 00:19:40.591 "nvme_iov_md": false 00:19:40.591 }, 00:19:40.591 "memory_domains": [ 00:19:40.591 { 00:19:40.591 "dma_device_id": "system", 00:19:40.591 "dma_device_type": 1 00:19:40.591 }, 00:19:40.591 { 00:19:40.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.591 "dma_device_type": 2 00:19:40.591 }, 00:19:40.591 { 00:19:40.591 "dma_device_id": "system", 00:19:40.591 "dma_device_type": 1 00:19:40.591 }, 00:19:40.591 { 00:19:40.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.591 "dma_device_type": 2 00:19:40.591 }, 00:19:40.591 { 00:19:40.591 "dma_device_id": "system", 00:19:40.591 "dma_device_type": 1 00:19:40.591 }, 00:19:40.591 { 00:19:40.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.591 "dma_device_type": 2 00:19:40.591 } 00:19:40.591 ], 00:19:40.591 "driver_specific": { 00:19:40.591 "raid": { 00:19:40.591 "uuid": "9e17f4a5-6230-4eb3-8ad2-59e484880b73", 00:19:40.591 "strip_size_kb": 64, 00:19:40.591 "state": "online", 00:19:40.591 "raid_level": "raid0", 00:19:40.591 "superblock": true, 00:19:40.591 "num_base_bdevs": 3, 00:19:40.591 "num_base_bdevs_discovered": 3, 00:19:40.591 "num_base_bdevs_operational": 3, 00:19:40.591 "base_bdevs_list": [ 00:19:40.591 { 00:19:40.591 "name": 
"BaseBdev1", 00:19:40.591 "uuid": "1f9910e7-284e-4f0c-b9ae-963d6b1e814d", 00:19:40.591 "is_configured": true, 00:19:40.591 "data_offset": 2048, 00:19:40.591 "data_size": 63488 00:19:40.591 }, 00:19:40.591 { 00:19:40.591 "name": "BaseBdev2", 00:19:40.591 "uuid": "9503a5e1-cd0d-4485-a562-20f0e1719dc0", 00:19:40.591 "is_configured": true, 00:19:40.591 "data_offset": 2048, 00:19:40.591 "data_size": 63488 00:19:40.591 }, 00:19:40.591 { 00:19:40.591 "name": "BaseBdev3", 00:19:40.591 "uuid": "0ef65e65-f18a-4637-84e0-879311ee75a3", 00:19:40.591 "is_configured": true, 00:19:40.591 "data_offset": 2048, 00:19:40.591 "data_size": 63488 00:19:40.591 } 00:19:40.591 ] 00:19:40.591 } 00:19:40.591 } 00:19:40.591 }' 00:19:40.591 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:40.852 BaseBdev2 00:19:40.852 BaseBdev3' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.852 17:07:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.852 [2024-11-08 17:07:17.468718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:40.852 [2024-11-08 17:07:17.468917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.852 [2024-11-08 17:07:17.469064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.852 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.113 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.113 "name": "Existed_Raid", 00:19:41.113 "uuid": "9e17f4a5-6230-4eb3-8ad2-59e484880b73", 00:19:41.113 "strip_size_kb": 64, 00:19:41.113 "state": "offline", 00:19:41.113 "raid_level": "raid0", 00:19:41.113 "superblock": true, 00:19:41.113 "num_base_bdevs": 3, 00:19:41.113 "num_base_bdevs_discovered": 2, 00:19:41.113 "num_base_bdevs_operational": 2, 00:19:41.113 "base_bdevs_list": [ 00:19:41.113 { 00:19:41.113 "name": null, 00:19:41.113 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:41.113 "is_configured": false, 00:19:41.113 "data_offset": 0, 00:19:41.113 "data_size": 63488 00:19:41.113 }, 00:19:41.113 { 00:19:41.113 "name": "BaseBdev2", 00:19:41.113 "uuid": "9503a5e1-cd0d-4485-a562-20f0e1719dc0", 00:19:41.113 "is_configured": true, 00:19:41.113 "data_offset": 2048, 00:19:41.113 "data_size": 63488 00:19:41.113 }, 00:19:41.113 { 00:19:41.113 "name": "BaseBdev3", 00:19:41.113 "uuid": "0ef65e65-f18a-4637-84e0-879311ee75a3", 00:19:41.113 "is_configured": true, 00:19:41.113 "data_offset": 2048, 00:19:41.113 "data_size": 63488 00:19:41.113 } 00:19:41.113 ] 00:19:41.113 }' 00:19:41.113 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.113 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.373 [2024-11-08 17:07:17.888803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.373 17:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.373 [2024-11-08 17:07:18.004231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:41.373 [2024-11-08 17:07:18.004486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:41.373 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.373 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:41.373 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:41.373 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.373 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:41.373 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.373 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.634 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.634 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:41.634 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:41.634 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:41.634 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:41.634 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:41.634 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.635 BaseBdev2 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.635 
17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.635 [ 00:19:41.635 { 00:19:41.635 "name": "BaseBdev2", 00:19:41.635 "aliases": [ 00:19:41.635 "1060758a-e2ab-4005-8e22-a87d089d57c4" 00:19:41.635 ], 00:19:41.635 "product_name": "Malloc disk", 00:19:41.635 "block_size": 512, 00:19:41.635 "num_blocks": 65536, 00:19:41.635 "uuid": "1060758a-e2ab-4005-8e22-a87d089d57c4", 00:19:41.635 "assigned_rate_limits": { 00:19:41.635 "rw_ios_per_sec": 0, 00:19:41.635 "rw_mbytes_per_sec": 0, 00:19:41.635 "r_mbytes_per_sec": 0, 00:19:41.635 "w_mbytes_per_sec": 0 
00:19:41.635 }, 00:19:41.635 "claimed": false, 00:19:41.635 "zoned": false, 00:19:41.635 "supported_io_types": { 00:19:41.635 "read": true, 00:19:41.635 "write": true, 00:19:41.635 "unmap": true, 00:19:41.635 "flush": true, 00:19:41.635 "reset": true, 00:19:41.635 "nvme_admin": false, 00:19:41.635 "nvme_io": false, 00:19:41.635 "nvme_io_md": false, 00:19:41.635 "write_zeroes": true, 00:19:41.635 "zcopy": true, 00:19:41.635 "get_zone_info": false, 00:19:41.635 "zone_management": false, 00:19:41.635 "zone_append": false, 00:19:41.635 "compare": false, 00:19:41.635 "compare_and_write": false, 00:19:41.635 "abort": true, 00:19:41.635 "seek_hole": false, 00:19:41.635 "seek_data": false, 00:19:41.635 "copy": true, 00:19:41.635 "nvme_iov_md": false 00:19:41.635 }, 00:19:41.635 "memory_domains": [ 00:19:41.635 { 00:19:41.635 "dma_device_id": "system", 00:19:41.635 "dma_device_type": 1 00:19:41.635 }, 00:19:41.635 { 00:19:41.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.635 "dma_device_type": 2 00:19:41.635 } 00:19:41.635 ], 00:19:41.635 "driver_specific": {} 00:19:41.635 } 00:19:41.635 ] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.635 BaseBdev3 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.635 [ 00:19:41.635 { 00:19:41.635 "name": "BaseBdev3", 00:19:41.635 "aliases": [ 00:19:41.635 "91f37e4c-adbf-4135-8c03-b343f069afc7" 00:19:41.635 ], 00:19:41.635 "product_name": "Malloc disk", 00:19:41.635 "block_size": 512, 00:19:41.635 "num_blocks": 65536, 00:19:41.635 "uuid": "91f37e4c-adbf-4135-8c03-b343f069afc7", 00:19:41.635 "assigned_rate_limits": { 00:19:41.635 "rw_ios_per_sec": 0, 00:19:41.635 "rw_mbytes_per_sec": 0, 
00:19:41.635 "r_mbytes_per_sec": 0, 00:19:41.635 "w_mbytes_per_sec": 0 00:19:41.635 }, 00:19:41.635 "claimed": false, 00:19:41.635 "zoned": false, 00:19:41.635 "supported_io_types": { 00:19:41.635 "read": true, 00:19:41.635 "write": true, 00:19:41.635 "unmap": true, 00:19:41.635 "flush": true, 00:19:41.635 "reset": true, 00:19:41.635 "nvme_admin": false, 00:19:41.635 "nvme_io": false, 00:19:41.635 "nvme_io_md": false, 00:19:41.635 "write_zeroes": true, 00:19:41.635 "zcopy": true, 00:19:41.635 "get_zone_info": false, 00:19:41.635 "zone_management": false, 00:19:41.635 "zone_append": false, 00:19:41.635 "compare": false, 00:19:41.635 "compare_and_write": false, 00:19:41.635 "abort": true, 00:19:41.635 "seek_hole": false, 00:19:41.635 "seek_data": false, 00:19:41.635 "copy": true, 00:19:41.635 "nvme_iov_md": false 00:19:41.635 }, 00:19:41.635 "memory_domains": [ 00:19:41.635 { 00:19:41.635 "dma_device_id": "system", 00:19:41.635 "dma_device_type": 1 00:19:41.635 }, 00:19:41.635 { 00:19:41.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.635 "dma_device_type": 2 00:19:41.635 } 00:19:41.635 ], 00:19:41.635 "driver_specific": {} 00:19:41.635 } 00:19:41.635 ] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.635 [2024-11-08 17:07:18.260082] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:41.635 [2024-11-08 17:07:18.260308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:41.635 [2024-11-08 17:07:18.260403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.635 [2024-11-08 17:07:18.262965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.635 17:07:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.635 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.635 "name": "Existed_Raid", 00:19:41.635 "uuid": "fc00bf65-d915-4880-8893-6e6d30dabcd4", 00:19:41.635 "strip_size_kb": 64, 00:19:41.635 "state": "configuring", 00:19:41.635 "raid_level": "raid0", 00:19:41.635 "superblock": true, 00:19:41.635 "num_base_bdevs": 3, 00:19:41.635 "num_base_bdevs_discovered": 2, 00:19:41.635 "num_base_bdevs_operational": 3, 00:19:41.636 "base_bdevs_list": [ 00:19:41.636 { 00:19:41.636 "name": "BaseBdev1", 00:19:41.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.636 "is_configured": false, 00:19:41.636 "data_offset": 0, 00:19:41.636 "data_size": 0 00:19:41.636 }, 00:19:41.636 { 00:19:41.636 "name": "BaseBdev2", 00:19:41.636 "uuid": "1060758a-e2ab-4005-8e22-a87d089d57c4", 00:19:41.636 "is_configured": true, 00:19:41.636 "data_offset": 2048, 00:19:41.636 "data_size": 63488 00:19:41.636 }, 00:19:41.636 { 00:19:41.636 "name": "BaseBdev3", 00:19:41.636 "uuid": "91f37e4c-adbf-4135-8c03-b343f069afc7", 00:19:41.636 "is_configured": true, 00:19:41.636 "data_offset": 2048, 00:19:41.636 "data_size": 63488 00:19:41.636 } 00:19:41.636 ] 00:19:41.636 }' 00:19:41.636 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.636 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.898 [2024-11-08 17:07:18.580156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.898 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.159 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.159 "name": "Existed_Raid", 00:19:42.159 "uuid": "fc00bf65-d915-4880-8893-6e6d30dabcd4", 00:19:42.159 "strip_size_kb": 64, 00:19:42.159 "state": "configuring", 00:19:42.159 "raid_level": "raid0", 00:19:42.159 "superblock": true, 00:19:42.159 "num_base_bdevs": 3, 00:19:42.159 "num_base_bdevs_discovered": 1, 00:19:42.159 "num_base_bdevs_operational": 3, 00:19:42.159 "base_bdevs_list": [ 00:19:42.159 { 00:19:42.159 "name": "BaseBdev1", 00:19:42.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.159 "is_configured": false, 00:19:42.159 "data_offset": 0, 00:19:42.159 "data_size": 0 00:19:42.159 }, 00:19:42.159 { 00:19:42.159 "name": null, 00:19:42.159 "uuid": "1060758a-e2ab-4005-8e22-a87d089d57c4", 00:19:42.159 "is_configured": false, 00:19:42.159 "data_offset": 0, 00:19:42.159 "data_size": 63488 00:19:42.159 }, 00:19:42.159 { 00:19:42.159 "name": "BaseBdev3", 00:19:42.159 "uuid": "91f37e4c-adbf-4135-8c03-b343f069afc7", 00:19:42.159 "is_configured": true, 00:19:42.159 "data_offset": 2048, 00:19:42.159 "data_size": 63488 00:19:42.159 } 00:19:42.159 ] 00:19:42.159 }' 00:19:42.159 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.159 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.419 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:42.419 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.419 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:42.419 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.419 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.419 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:42.419 17:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:42.419 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.419 17:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.419 [2024-11-08 17:07:19.007402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:42.419 BaseBdev1 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.419 17:07:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.419 [ 00:19:42.419 { 00:19:42.419 "name": "BaseBdev1", 00:19:42.419 "aliases": [ 00:19:42.419 "6390b887-fad8-45c4-96aa-aa923cee0a17" 00:19:42.419 ], 00:19:42.419 "product_name": "Malloc disk", 00:19:42.419 "block_size": 512, 00:19:42.419 "num_blocks": 65536, 00:19:42.419 "uuid": "6390b887-fad8-45c4-96aa-aa923cee0a17", 00:19:42.419 "assigned_rate_limits": { 00:19:42.419 "rw_ios_per_sec": 0, 00:19:42.419 "rw_mbytes_per_sec": 0, 00:19:42.419 "r_mbytes_per_sec": 0, 00:19:42.419 "w_mbytes_per_sec": 0 00:19:42.419 }, 00:19:42.419 "claimed": true, 00:19:42.419 "claim_type": "exclusive_write", 00:19:42.419 "zoned": false, 00:19:42.419 "supported_io_types": { 00:19:42.419 "read": true, 00:19:42.419 "write": true, 00:19:42.419 "unmap": true, 00:19:42.419 "flush": true, 00:19:42.419 "reset": true, 00:19:42.419 "nvme_admin": false, 00:19:42.419 "nvme_io": false, 00:19:42.419 "nvme_io_md": false, 00:19:42.419 "write_zeroes": true, 00:19:42.419 "zcopy": true, 00:19:42.419 "get_zone_info": false, 00:19:42.419 "zone_management": false, 00:19:42.419 "zone_append": false, 00:19:42.419 "compare": false, 00:19:42.419 "compare_and_write": false, 00:19:42.419 "abort": true, 00:19:42.419 "seek_hole": false, 00:19:42.419 "seek_data": false, 00:19:42.419 "copy": true, 00:19:42.419 "nvme_iov_md": false 00:19:42.419 }, 00:19:42.419 "memory_domains": [ 00:19:42.419 { 00:19:42.419 "dma_device_id": "system", 00:19:42.419 "dma_device_type": 1 00:19:42.419 }, 00:19:42.419 { 00:19:42.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.419 
"dma_device_type": 2 00:19:42.419 } 00:19:42.419 ], 00:19:42.419 "driver_specific": {} 00:19:42.419 } 00:19:42.419 ] 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.419 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.419 "name": "Existed_Raid", 00:19:42.419 "uuid": "fc00bf65-d915-4880-8893-6e6d30dabcd4", 00:19:42.420 "strip_size_kb": 64, 00:19:42.420 "state": "configuring", 00:19:42.420 "raid_level": "raid0", 00:19:42.420 "superblock": true, 00:19:42.420 "num_base_bdevs": 3, 00:19:42.420 "num_base_bdevs_discovered": 2, 00:19:42.420 "num_base_bdevs_operational": 3, 00:19:42.420 "base_bdevs_list": [ 00:19:42.420 { 00:19:42.420 "name": "BaseBdev1", 00:19:42.420 "uuid": "6390b887-fad8-45c4-96aa-aa923cee0a17", 00:19:42.420 "is_configured": true, 00:19:42.420 "data_offset": 2048, 00:19:42.420 "data_size": 63488 00:19:42.420 }, 00:19:42.420 { 00:19:42.420 "name": null, 00:19:42.420 "uuid": "1060758a-e2ab-4005-8e22-a87d089d57c4", 00:19:42.420 "is_configured": false, 00:19:42.420 "data_offset": 0, 00:19:42.420 "data_size": 63488 00:19:42.420 }, 00:19:42.420 { 00:19:42.420 "name": "BaseBdev3", 00:19:42.420 "uuid": "91f37e4c-adbf-4135-8c03-b343f069afc7", 00:19:42.420 "is_configured": true, 00:19:42.420 "data_offset": 2048, 00:19:42.420 "data_size": 63488 00:19:42.420 } 00:19:42.420 ] 00:19:42.420 }' 00:19:42.420 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.420 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.679 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.679 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:42.679 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.679 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:42.679 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.679 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:42.679 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:42.679 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.679 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.679 [2024-11-08 17:07:19.387593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.941 
17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.941 "name": "Existed_Raid", 00:19:42.941 "uuid": "fc00bf65-d915-4880-8893-6e6d30dabcd4", 00:19:42.941 "strip_size_kb": 64, 00:19:42.941 "state": "configuring", 00:19:42.941 "raid_level": "raid0", 00:19:42.941 "superblock": true, 00:19:42.941 "num_base_bdevs": 3, 00:19:42.941 "num_base_bdevs_discovered": 1, 00:19:42.941 "num_base_bdevs_operational": 3, 00:19:42.941 "base_bdevs_list": [ 00:19:42.941 { 00:19:42.941 "name": "BaseBdev1", 00:19:42.941 "uuid": "6390b887-fad8-45c4-96aa-aa923cee0a17", 00:19:42.941 "is_configured": true, 00:19:42.941 "data_offset": 2048, 00:19:42.941 "data_size": 63488 00:19:42.941 }, 00:19:42.941 { 00:19:42.941 "name": null, 00:19:42.941 "uuid": "1060758a-e2ab-4005-8e22-a87d089d57c4", 00:19:42.941 "is_configured": false, 00:19:42.941 "data_offset": 0, 00:19:42.941 "data_size": 63488 00:19:42.941 }, 00:19:42.941 { 00:19:42.941 "name": null, 00:19:42.941 "uuid": "91f37e4c-adbf-4135-8c03-b343f069afc7", 00:19:42.941 "is_configured": false, 00:19:42.941 "data_offset": 0, 00:19:42.941 "data_size": 63488 00:19:42.941 } 00:19:42.941 ] 00:19:42.941 }' 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.941 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.202 
17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.202 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:43.202 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.202 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.202 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.202 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:43.202 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:43.202 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.202 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.202 [2024-11-08 17:07:19.747672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.202 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.203 "name": "Existed_Raid", 00:19:43.203 "uuid": "fc00bf65-d915-4880-8893-6e6d30dabcd4", 00:19:43.203 "strip_size_kb": 64, 00:19:43.203 "state": "configuring", 00:19:43.203 "raid_level": "raid0", 00:19:43.203 "superblock": true, 00:19:43.203 "num_base_bdevs": 3, 00:19:43.203 "num_base_bdevs_discovered": 2, 00:19:43.203 "num_base_bdevs_operational": 3, 00:19:43.203 "base_bdevs_list": [ 00:19:43.203 { 00:19:43.203 "name": "BaseBdev1", 00:19:43.203 "uuid": "6390b887-fad8-45c4-96aa-aa923cee0a17", 00:19:43.203 "is_configured": true, 00:19:43.203 "data_offset": 2048, 00:19:43.203 "data_size": 63488 00:19:43.203 }, 00:19:43.203 { 00:19:43.203 "name": null, 00:19:43.203 "uuid": "1060758a-e2ab-4005-8e22-a87d089d57c4", 00:19:43.203 "is_configured": false, 00:19:43.203 "data_offset": 0, 00:19:43.203 "data_size": 63488 
00:19:43.203 }, 00:19:43.203 { 00:19:43.203 "name": "BaseBdev3", 00:19:43.203 "uuid": "91f37e4c-adbf-4135-8c03-b343f069afc7", 00:19:43.203 "is_configured": true, 00:19:43.203 "data_offset": 2048, 00:19:43.203 "data_size": 63488 00:19:43.203 } 00:19:43.203 ] 00:19:43.203 }' 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.203 17:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.465 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.465 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.465 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.465 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:43.465 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.465 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:43.465 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:43.465 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.465 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.465 [2024-11-08 17:07:20.127837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.726 "name": "Existed_Raid", 00:19:43.726 "uuid": "fc00bf65-d915-4880-8893-6e6d30dabcd4", 00:19:43.726 "strip_size_kb": 64, 00:19:43.726 "state": "configuring", 00:19:43.726 "raid_level": "raid0", 00:19:43.726 "superblock": true, 00:19:43.726 "num_base_bdevs": 3, 00:19:43.726 "num_base_bdevs_discovered": 1, 00:19:43.726 "num_base_bdevs_operational": 3, 
00:19:43.726 "base_bdevs_list": [ 00:19:43.726 { 00:19:43.726 "name": null, 00:19:43.726 "uuid": "6390b887-fad8-45c4-96aa-aa923cee0a17", 00:19:43.726 "is_configured": false, 00:19:43.726 "data_offset": 0, 00:19:43.726 "data_size": 63488 00:19:43.726 }, 00:19:43.726 { 00:19:43.726 "name": null, 00:19:43.726 "uuid": "1060758a-e2ab-4005-8e22-a87d089d57c4", 00:19:43.726 "is_configured": false, 00:19:43.726 "data_offset": 0, 00:19:43.726 "data_size": 63488 00:19:43.726 }, 00:19:43.726 { 00:19:43.726 "name": "BaseBdev3", 00:19:43.726 "uuid": "91f37e4c-adbf-4135-8c03-b343f069afc7", 00:19:43.726 "is_configured": true, 00:19:43.726 "data_offset": 2048, 00:19:43.726 "data_size": 63488 00:19:43.726 } 00:19:43.726 ] 00:19:43.726 }' 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.726 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:43.987 [2024-11-08 17:07:20.582798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.987 "name": "Existed_Raid", 00:19:43.987 "uuid": "fc00bf65-d915-4880-8893-6e6d30dabcd4", 00:19:43.987 "strip_size_kb": 64, 00:19:43.987 "state": "configuring", 00:19:43.987 "raid_level": "raid0", 00:19:43.987 "superblock": true, 00:19:43.987 "num_base_bdevs": 3, 00:19:43.987 "num_base_bdevs_discovered": 2, 00:19:43.987 "num_base_bdevs_operational": 3, 00:19:43.987 "base_bdevs_list": [ 00:19:43.987 { 00:19:43.987 "name": null, 00:19:43.987 "uuid": "6390b887-fad8-45c4-96aa-aa923cee0a17", 00:19:43.987 "is_configured": false, 00:19:43.987 "data_offset": 0, 00:19:43.987 "data_size": 63488 00:19:43.987 }, 00:19:43.987 { 00:19:43.987 "name": "BaseBdev2", 00:19:43.987 "uuid": "1060758a-e2ab-4005-8e22-a87d089d57c4", 00:19:43.987 "is_configured": true, 00:19:43.987 "data_offset": 2048, 00:19:43.987 "data_size": 63488 00:19:43.987 }, 00:19:43.987 { 00:19:43.987 "name": "BaseBdev3", 00:19:43.987 "uuid": "91f37e4c-adbf-4135-8c03-b343f069afc7", 00:19:43.987 "is_configured": true, 00:19:43.987 "data_offset": 2048, 00:19:43.987 "data_size": 63488 00:19:43.987 } 00:19:43.987 ] 00:19:43.987 }' 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.987 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.249 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:44.249 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.249 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.249 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.249 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:44.249 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:44.249 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:44.511 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.511 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.511 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.511 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.511 17:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6390b887-fad8-45c4-96aa-aa923cee0a17 00:19:44.511 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.511 17:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.511 [2024-11-08 17:07:21.026515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:44.511 NewBaseBdev 00:19:44.511 [2024-11-08 17:07:21.027123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:44.511 [2024-11-08 17:07:21.027157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:44.511 [2024-11-08 17:07:21.027480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:44.511 [2024-11-08 17:07:21.027643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:44.511 [2024-11-08 17:07:21.027653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:44.511 [2024-11-08 17:07:21.027830] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.511 [ 00:19:44.511 { 00:19:44.511 "name": "NewBaseBdev", 00:19:44.511 "aliases": [ 00:19:44.511 "6390b887-fad8-45c4-96aa-aa923cee0a17" 00:19:44.511 ], 00:19:44.511 "product_name": "Malloc disk", 00:19:44.511 "block_size": 512, 00:19:44.511 "num_blocks": 65536, 00:19:44.511 "uuid": "6390b887-fad8-45c4-96aa-aa923cee0a17", 00:19:44.511 
"assigned_rate_limits": { 00:19:44.511 "rw_ios_per_sec": 0, 00:19:44.511 "rw_mbytes_per_sec": 0, 00:19:44.511 "r_mbytes_per_sec": 0, 00:19:44.511 "w_mbytes_per_sec": 0 00:19:44.511 }, 00:19:44.511 "claimed": true, 00:19:44.511 "claim_type": "exclusive_write", 00:19:44.511 "zoned": false, 00:19:44.511 "supported_io_types": { 00:19:44.511 "read": true, 00:19:44.511 "write": true, 00:19:44.511 "unmap": true, 00:19:44.511 "flush": true, 00:19:44.511 "reset": true, 00:19:44.511 "nvme_admin": false, 00:19:44.511 "nvme_io": false, 00:19:44.511 "nvme_io_md": false, 00:19:44.511 "write_zeroes": true, 00:19:44.511 "zcopy": true, 00:19:44.511 "get_zone_info": false, 00:19:44.511 "zone_management": false, 00:19:44.511 "zone_append": false, 00:19:44.511 "compare": false, 00:19:44.511 "compare_and_write": false, 00:19:44.511 "abort": true, 00:19:44.511 "seek_hole": false, 00:19:44.511 "seek_data": false, 00:19:44.511 "copy": true, 00:19:44.511 "nvme_iov_md": false 00:19:44.511 }, 00:19:44.511 "memory_domains": [ 00:19:44.511 { 00:19:44.511 "dma_device_id": "system", 00:19:44.511 "dma_device_type": 1 00:19:44.511 }, 00:19:44.511 { 00:19:44.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.511 "dma_device_type": 2 00:19:44.511 } 00:19:44.511 ], 00:19:44.511 "driver_specific": {} 00:19:44.511 } 00:19:44.511 ] 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:44.511 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.512 "name": "Existed_Raid", 00:19:44.512 "uuid": "fc00bf65-d915-4880-8893-6e6d30dabcd4", 00:19:44.512 "strip_size_kb": 64, 00:19:44.512 "state": "online", 00:19:44.512 "raid_level": "raid0", 00:19:44.512 "superblock": true, 00:19:44.512 "num_base_bdevs": 3, 00:19:44.512 "num_base_bdevs_discovered": 3, 00:19:44.512 "num_base_bdevs_operational": 3, 00:19:44.512 "base_bdevs_list": [ 00:19:44.512 { 00:19:44.512 "name": "NewBaseBdev", 00:19:44.512 "uuid": "6390b887-fad8-45c4-96aa-aa923cee0a17", 00:19:44.512 "is_configured": true, 00:19:44.512 "data_offset": 2048, 
00:19:44.512 "data_size": 63488 00:19:44.512 }, 00:19:44.512 { 00:19:44.512 "name": "BaseBdev2", 00:19:44.512 "uuid": "1060758a-e2ab-4005-8e22-a87d089d57c4", 00:19:44.512 "is_configured": true, 00:19:44.512 "data_offset": 2048, 00:19:44.512 "data_size": 63488 00:19:44.512 }, 00:19:44.512 { 00:19:44.512 "name": "BaseBdev3", 00:19:44.512 "uuid": "91f37e4c-adbf-4135-8c03-b343f069afc7", 00:19:44.512 "is_configured": true, 00:19:44.512 "data_offset": 2048, 00:19:44.512 "data_size": 63488 00:19:44.512 } 00:19:44.512 ] 00:19:44.512 }' 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.512 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.773 [2024-11-08 17:07:21.399077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.773 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:44.773 "name": "Existed_Raid", 00:19:44.773 "aliases": [ 00:19:44.773 "fc00bf65-d915-4880-8893-6e6d30dabcd4" 00:19:44.773 ], 00:19:44.773 "product_name": "Raid Volume", 00:19:44.773 "block_size": 512, 00:19:44.773 "num_blocks": 190464, 00:19:44.773 "uuid": "fc00bf65-d915-4880-8893-6e6d30dabcd4", 00:19:44.773 "assigned_rate_limits": { 00:19:44.773 "rw_ios_per_sec": 0, 00:19:44.773 "rw_mbytes_per_sec": 0, 00:19:44.773 "r_mbytes_per_sec": 0, 00:19:44.773 "w_mbytes_per_sec": 0 00:19:44.773 }, 00:19:44.773 "claimed": false, 00:19:44.773 "zoned": false, 00:19:44.773 "supported_io_types": { 00:19:44.773 "read": true, 00:19:44.773 "write": true, 00:19:44.773 "unmap": true, 00:19:44.773 "flush": true, 00:19:44.773 "reset": true, 00:19:44.773 "nvme_admin": false, 00:19:44.773 "nvme_io": false, 00:19:44.773 "nvme_io_md": false, 00:19:44.773 "write_zeroes": true, 00:19:44.773 "zcopy": false, 00:19:44.773 "get_zone_info": false, 00:19:44.773 "zone_management": false, 00:19:44.773 "zone_append": false, 00:19:44.773 "compare": false, 00:19:44.773 "compare_and_write": false, 00:19:44.773 "abort": false, 00:19:44.773 "seek_hole": false, 00:19:44.773 "seek_data": false, 00:19:44.773 "copy": false, 00:19:44.773 "nvme_iov_md": false 00:19:44.773 }, 00:19:44.773 "memory_domains": [ 00:19:44.773 { 00:19:44.773 "dma_device_id": "system", 00:19:44.773 "dma_device_type": 1 00:19:44.773 }, 00:19:44.773 { 00:19:44.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.773 "dma_device_type": 2 00:19:44.773 }, 00:19:44.773 { 00:19:44.773 "dma_device_id": "system", 00:19:44.773 "dma_device_type": 1 00:19:44.773 }, 00:19:44.773 { 00:19:44.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.773 "dma_device_type": 2 00:19:44.773 }, 00:19:44.773 { 
00:19:44.773 "dma_device_id": "system", 00:19:44.773 "dma_device_type": 1 00:19:44.773 }, 00:19:44.773 { 00:19:44.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.773 "dma_device_type": 2 00:19:44.773 } 00:19:44.773 ], 00:19:44.773 "driver_specific": { 00:19:44.773 "raid": { 00:19:44.774 "uuid": "fc00bf65-d915-4880-8893-6e6d30dabcd4", 00:19:44.774 "strip_size_kb": 64, 00:19:44.774 "state": "online", 00:19:44.774 "raid_level": "raid0", 00:19:44.774 "superblock": true, 00:19:44.774 "num_base_bdevs": 3, 00:19:44.774 "num_base_bdevs_discovered": 3, 00:19:44.774 "num_base_bdevs_operational": 3, 00:19:44.774 "base_bdevs_list": [ 00:19:44.774 { 00:19:44.774 "name": "NewBaseBdev", 00:19:44.774 "uuid": "6390b887-fad8-45c4-96aa-aa923cee0a17", 00:19:44.774 "is_configured": true, 00:19:44.774 "data_offset": 2048, 00:19:44.774 "data_size": 63488 00:19:44.774 }, 00:19:44.774 { 00:19:44.774 "name": "BaseBdev2", 00:19:44.774 "uuid": "1060758a-e2ab-4005-8e22-a87d089d57c4", 00:19:44.774 "is_configured": true, 00:19:44.774 "data_offset": 2048, 00:19:44.774 "data_size": 63488 00:19:44.774 }, 00:19:44.774 { 00:19:44.774 "name": "BaseBdev3", 00:19:44.774 "uuid": "91f37e4c-adbf-4135-8c03-b343f069afc7", 00:19:44.774 "is_configured": true, 00:19:44.774 "data_offset": 2048, 00:19:44.774 "data_size": 63488 00:19:44.774 } 00:19:44.774 ] 00:19:44.774 } 00:19:44.774 } 00:19:44.774 }' 00:19:44.774 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:44.774 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:44.774 BaseBdev2 00:19:44.774 BaseBdev3' 00:19:44.774 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.033 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:19:45.033 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.033 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:45.033 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.034 [2024-11-08 17:07:21.590686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:45.034 [2024-11-08 17:07:21.590867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:45.034 [2024-11-08 17:07:21.591052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.034 [2024-11-08 17:07:21.591182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.034 [2024-11-08 17:07:21.591257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63309 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63309 ']' 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63309 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63309 00:19:45.034 killing process with pid 63309 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63309' 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63309 00:19:45.034 [2024-11-08 17:07:21.626259] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.034 17:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 63309 00:19:45.296 [2024-11-08 17:07:21.847614] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:46.241 17:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:46.241 00:19:46.241 real 0m8.270s 00:19:46.241 user 0m12.640s 00:19:46.241 sys 0m1.662s 00:19:46.241 17:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:46.241 ************************************ 00:19:46.241 END TEST raid_state_function_test_sb 
00:19:46.241 ************************************ 00:19:46.241 17:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.241 17:07:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:19:46.241 17:07:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:46.241 17:07:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:46.241 17:07:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.241 ************************************ 00:19:46.241 START TEST raid_superblock_test 00:19:46.241 ************************************ 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:46.241 17:07:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63907 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63907 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63907 ']' 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:46.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:46.241 17:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.242 [2024-11-08 17:07:22.879179] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:19:46.242 [2024-11-08 17:07:22.879621] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63907 ] 00:19:46.504 [2024-11-08 17:07:23.045570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.504 [2024-11-08 17:07:23.204685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.767 [2024-11-08 17:07:23.383474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.767 [2024-11-08 17:07:23.383584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:47.338 
17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.338 malloc1 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.338 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.338 [2024-11-08 17:07:23.819378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:47.338 [2024-11-08 17:07:23.819687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.338 [2024-11-08 17:07:23.819728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:47.338 [2024-11-08 17:07:23.819740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.338 [2024-11-08 17:07:23.822723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.339 [2024-11-08 17:07:23.822809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:47.339 pt1 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.339 malloc2 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.339 [2024-11-08 17:07:23.872424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:47.339 [2024-11-08 17:07:23.872684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.339 [2024-11-08 17:07:23.872746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:47.339 [2024-11-08 17:07:23.872853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.339 [2024-11-08 17:07:23.875719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.339 [2024-11-08 17:07:23.875916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:47.339 
pt2 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.339 malloc3 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.339 [2024-11-08 17:07:23.939622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:47.339 [2024-11-08 17:07:23.939905] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.339 [2024-11-08 17:07:23.939971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:47.339 [2024-11-08 17:07:23.940046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.339 [2024-11-08 17:07:23.942943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.339 [2024-11-08 17:07:23.943120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:47.339 pt3 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.339 [2024-11-08 17:07:23.951712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:47.339 [2024-11-08 17:07:23.954119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:47.339 [2024-11-08 17:07:23.954343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:47.339 [2024-11-08 17:07:23.954555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:47.339 [2024-11-08 17:07:23.954572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:47.339 [2024-11-08 17:07:23.954958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:19:47.339 [2024-11-08 17:07:23.955152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:47.339 [2024-11-08 17:07:23.955163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:47.339 [2024-11-08 17:07:23.955348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.339 17:07:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.339 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.339 "name": "raid_bdev1", 00:19:47.340 "uuid": "c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7", 00:19:47.340 "strip_size_kb": 64, 00:19:47.340 "state": "online", 00:19:47.340 "raid_level": "raid0", 00:19:47.340 "superblock": true, 00:19:47.340 "num_base_bdevs": 3, 00:19:47.340 "num_base_bdevs_discovered": 3, 00:19:47.340 "num_base_bdevs_operational": 3, 00:19:47.340 "base_bdevs_list": [ 00:19:47.340 { 00:19:47.340 "name": "pt1", 00:19:47.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:47.340 "is_configured": true, 00:19:47.340 "data_offset": 2048, 00:19:47.340 "data_size": 63488 00:19:47.340 }, 00:19:47.340 { 00:19:47.340 "name": "pt2", 00:19:47.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:47.340 "is_configured": true, 00:19:47.340 "data_offset": 2048, 00:19:47.340 "data_size": 63488 00:19:47.340 }, 00:19:47.340 { 00:19:47.340 "name": "pt3", 00:19:47.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:47.340 "is_configured": true, 00:19:47.340 "data_offset": 2048, 00:19:47.340 "data_size": 63488 00:19:47.340 } 00:19:47.340 ] 00:19:47.340 }' 00:19:47.340 17:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.340 17:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:47.620 [2024-11-08 17:07:24.300138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:47.620 "name": "raid_bdev1", 00:19:47.620 "aliases": [ 00:19:47.620 "c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7" 00:19:47.620 ], 00:19:47.620 "product_name": "Raid Volume", 00:19:47.620 "block_size": 512, 00:19:47.620 "num_blocks": 190464, 00:19:47.620 "uuid": "c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7", 00:19:47.620 "assigned_rate_limits": { 00:19:47.620 "rw_ios_per_sec": 0, 00:19:47.620 "rw_mbytes_per_sec": 0, 00:19:47.620 "r_mbytes_per_sec": 0, 00:19:47.620 "w_mbytes_per_sec": 0 00:19:47.620 }, 00:19:47.620 "claimed": false, 00:19:47.620 "zoned": false, 00:19:47.620 "supported_io_types": { 00:19:47.620 "read": true, 00:19:47.620 "write": true, 00:19:47.620 "unmap": true, 00:19:47.620 "flush": true, 00:19:47.620 "reset": true, 00:19:47.620 "nvme_admin": false, 00:19:47.620 "nvme_io": false, 00:19:47.620 "nvme_io_md": false, 00:19:47.620 "write_zeroes": true, 00:19:47.620 "zcopy": false, 00:19:47.620 "get_zone_info": false, 00:19:47.620 "zone_management": false, 00:19:47.620 "zone_append": false, 00:19:47.620 "compare": 
false, 00:19:47.620 "compare_and_write": false, 00:19:47.620 "abort": false, 00:19:47.620 "seek_hole": false, 00:19:47.620 "seek_data": false, 00:19:47.620 "copy": false, 00:19:47.620 "nvme_iov_md": false 00:19:47.620 }, 00:19:47.620 "memory_domains": [ 00:19:47.620 { 00:19:47.620 "dma_device_id": "system", 00:19:47.620 "dma_device_type": 1 00:19:47.620 }, 00:19:47.620 { 00:19:47.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.620 "dma_device_type": 2 00:19:47.620 }, 00:19:47.620 { 00:19:47.620 "dma_device_id": "system", 00:19:47.620 "dma_device_type": 1 00:19:47.620 }, 00:19:47.620 { 00:19:47.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.620 "dma_device_type": 2 00:19:47.620 }, 00:19:47.620 { 00:19:47.620 "dma_device_id": "system", 00:19:47.620 "dma_device_type": 1 00:19:47.620 }, 00:19:47.620 { 00:19:47.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.620 "dma_device_type": 2 00:19:47.620 } 00:19:47.620 ], 00:19:47.620 "driver_specific": { 00:19:47.620 "raid": { 00:19:47.620 "uuid": "c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7", 00:19:47.620 "strip_size_kb": 64, 00:19:47.620 "state": "online", 00:19:47.620 "raid_level": "raid0", 00:19:47.620 "superblock": true, 00:19:47.620 "num_base_bdevs": 3, 00:19:47.620 "num_base_bdevs_discovered": 3, 00:19:47.620 "num_base_bdevs_operational": 3, 00:19:47.620 "base_bdevs_list": [ 00:19:47.620 { 00:19:47.620 "name": "pt1", 00:19:47.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:47.620 "is_configured": true, 00:19:47.620 "data_offset": 2048, 00:19:47.620 "data_size": 63488 00:19:47.620 }, 00:19:47.620 { 00:19:47.620 "name": "pt2", 00:19:47.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:47.620 "is_configured": true, 00:19:47.620 "data_offset": 2048, 00:19:47.620 "data_size": 63488 00:19:47.620 }, 00:19:47.620 { 00:19:47.620 "name": "pt3", 00:19:47.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:47.620 "is_configured": true, 00:19:47.620 "data_offset": 2048, 00:19:47.620 "data_size": 
63488 00:19:47.620 } 00:19:47.620 ] 00:19:47.620 } 00:19:47.620 } 00:19:47.620 }' 00:19:47.620 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:47.881 pt2 00:19:47.881 pt3' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.881 [2024-11-08 17:07:24.516091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7 ']' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.881 [2024-11-08 17:07:24.547742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.881 [2024-11-08 17:07:24.547920] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.881 [2024-11-08 17:07:24.548096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.881 [2024-11-08 17:07:24.548211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.881 [2024-11-08 17:07:24.548443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.881 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:48.142 17:07:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.142 [2024-11-08 17:07:24.659884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:48.142 [2024-11-08 17:07:24.662551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:48.142 [2024-11-08 17:07:24.662623] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:48.142 [2024-11-08 17:07:24.662694] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:48.142 [2024-11-08 17:07:24.662800] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:48.142 [2024-11-08 17:07:24.662823] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:48.142 [2024-11-08 17:07:24.662843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:48.142 [2024-11-08 17:07:24.662858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:48.142 request: 00:19:48.142 { 00:19:48.142 "name": "raid_bdev1", 00:19:48.142 "raid_level": "raid0", 00:19:48.142 "base_bdevs": [ 00:19:48.142 "malloc1", 00:19:48.142 "malloc2", 00:19:48.142 "malloc3" 00:19:48.142 ], 00:19:48.142 "strip_size_kb": 64, 00:19:48.142 "superblock": false, 00:19:48.142 "method": "bdev_raid_create", 00:19:48.142 "req_id": 1 00:19:48.142 } 00:19:48.142 Got JSON-RPC error response 00:19:48.142 response: 00:19:48.142 { 00:19:48.142 "code": -17, 00:19:48.142 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:48.142 } 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.142 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.143 [2024-11-08 17:07:24.707773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:48.143 [2024-11-08 17:07:24.707975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.143 [2024-11-08 17:07:24.708028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:48.143 [2024-11-08 17:07:24.708087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.143 [2024-11-08 17:07:24.710980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.143 [2024-11-08 17:07:24.711143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:48.143 [2024-11-08 17:07:24.711272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:48.143 [2024-11-08 17:07:24.711343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:19:48.143 pt1 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.143 "name": "raid_bdev1", 00:19:48.143 "uuid": "c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7", 00:19:48.143 
"strip_size_kb": 64, 00:19:48.143 "state": "configuring", 00:19:48.143 "raid_level": "raid0", 00:19:48.143 "superblock": true, 00:19:48.143 "num_base_bdevs": 3, 00:19:48.143 "num_base_bdevs_discovered": 1, 00:19:48.143 "num_base_bdevs_operational": 3, 00:19:48.143 "base_bdevs_list": [ 00:19:48.143 { 00:19:48.143 "name": "pt1", 00:19:48.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:48.143 "is_configured": true, 00:19:48.143 "data_offset": 2048, 00:19:48.143 "data_size": 63488 00:19:48.143 }, 00:19:48.143 { 00:19:48.143 "name": null, 00:19:48.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:48.143 "is_configured": false, 00:19:48.143 "data_offset": 2048, 00:19:48.143 "data_size": 63488 00:19:48.143 }, 00:19:48.143 { 00:19:48.143 "name": null, 00:19:48.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:48.143 "is_configured": false, 00:19:48.143 "data_offset": 2048, 00:19:48.143 "data_size": 63488 00:19:48.143 } 00:19:48.143 ] 00:19:48.143 }' 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.143 17:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.410 [2024-11-08 17:07:25.043921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:48.410 [2024-11-08 17:07:25.044152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.410 [2024-11-08 17:07:25.044192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:19:48.410 [2024-11-08 17:07:25.044204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.410 [2024-11-08 17:07:25.044856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.410 [2024-11-08 17:07:25.044877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:48.410 [2024-11-08 17:07:25.045002] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:48.410 [2024-11-08 17:07:25.045032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:48.410 pt2 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.410 [2024-11-08 17:07:25.051917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:48.410 17:07:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.410 "name": "raid_bdev1", 00:19:48.410 "uuid": "c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7", 00:19:48.410 "strip_size_kb": 64, 00:19:48.410 "state": "configuring", 00:19:48.410 "raid_level": "raid0", 00:19:48.410 "superblock": true, 00:19:48.410 "num_base_bdevs": 3, 00:19:48.410 "num_base_bdevs_discovered": 1, 00:19:48.410 "num_base_bdevs_operational": 3, 00:19:48.410 "base_bdevs_list": [ 00:19:48.410 { 00:19:48.410 "name": "pt1", 00:19:48.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:48.410 "is_configured": true, 00:19:48.410 "data_offset": 2048, 00:19:48.410 "data_size": 63488 00:19:48.410 }, 00:19:48.410 { 00:19:48.410 "name": null, 00:19:48.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:48.410 "is_configured": false, 00:19:48.410 "data_offset": 0, 00:19:48.410 "data_size": 63488 00:19:48.410 }, 00:19:48.410 { 00:19:48.410 "name": null, 00:19:48.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:48.410 
"is_configured": false, 00:19:48.410 "data_offset": 2048, 00:19:48.410 "data_size": 63488 00:19:48.410 } 00:19:48.410 ] 00:19:48.410 }' 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.410 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.699 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:48.699 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:48.699 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:48.699 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.699 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.699 [2024-11-08 17:07:25.411980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:48.959 [2024-11-08 17:07:25.412257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.959 [2024-11-08 17:07:25.412294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:48.959 [2024-11-08 17:07:25.412308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.959 [2024-11-08 17:07:25.412987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.959 [2024-11-08 17:07:25.413021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:48.959 [2024-11-08 17:07:25.413139] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:48.959 [2024-11-08 17:07:25.413172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:48.959 pt2 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.959 [2024-11-08 17:07:25.423944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:48.959 [2024-11-08 17:07:25.424167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.959 [2024-11-08 17:07:25.424213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:48.959 [2024-11-08 17:07:25.424286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.959 [2024-11-08 17:07:25.424895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.959 [2024-11-08 17:07:25.425052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:48.959 [2024-11-08 17:07:25.425222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:48.959 [2024-11-08 17:07:25.425276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:48.959 [2024-11-08 17:07:25.425458] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:48.959 [2024-11-08 17:07:25.425489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:48.959 [2024-11-08 17:07:25.426985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:48.959 [2024-11-08 17:07:25.427861] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:48.959 [2024-11-08 17:07:25.428133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:48.959 [2024-11-08 17:07:25.428940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.959 pt3 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.959 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.960 "name": "raid_bdev1", 00:19:48.960 "uuid": "c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7", 00:19:48.960 "strip_size_kb": 64, 00:19:48.960 "state": "online", 00:19:48.960 "raid_level": "raid0", 00:19:48.960 "superblock": true, 00:19:48.960 "num_base_bdevs": 3, 00:19:48.960 "num_base_bdevs_discovered": 3, 00:19:48.960 "num_base_bdevs_operational": 3, 00:19:48.960 "base_bdevs_list": [ 00:19:48.960 { 00:19:48.960 "name": "pt1", 00:19:48.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:48.960 "is_configured": true, 00:19:48.960 "data_offset": 2048, 00:19:48.960 "data_size": 63488 00:19:48.960 }, 00:19:48.960 { 00:19:48.960 "name": "pt2", 00:19:48.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:48.960 "is_configured": true, 00:19:48.960 "data_offset": 2048, 00:19:48.960 "data_size": 63488 00:19:48.960 }, 00:19:48.960 { 00:19:48.960 "name": "pt3", 00:19:48.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:48.960 "is_configured": true, 00:19:48.960 "data_offset": 2048, 00:19:48.960 "data_size": 63488 00:19:48.960 } 00:19:48.960 ] 00:19:48.960 }' 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.960 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:49.221 17:07:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:49.221 [2024-11-08 17:07:25.765077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:49.221 "name": "raid_bdev1", 00:19:49.221 "aliases": [ 00:19:49.221 "c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7" 00:19:49.221 ], 00:19:49.221 "product_name": "Raid Volume", 00:19:49.221 "block_size": 512, 00:19:49.221 "num_blocks": 190464, 00:19:49.221 "uuid": "c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7", 00:19:49.221 "assigned_rate_limits": { 00:19:49.221 "rw_ios_per_sec": 0, 00:19:49.221 "rw_mbytes_per_sec": 0, 00:19:49.221 "r_mbytes_per_sec": 0, 00:19:49.221 "w_mbytes_per_sec": 0 00:19:49.221 }, 00:19:49.221 "claimed": false, 00:19:49.221 "zoned": false, 00:19:49.221 "supported_io_types": { 00:19:49.221 "read": true, 00:19:49.221 "write": true, 00:19:49.221 "unmap": true, 00:19:49.221 "flush": true, 00:19:49.221 "reset": true, 00:19:49.221 "nvme_admin": false, 00:19:49.221 "nvme_io": false, 00:19:49.221 "nvme_io_md": false, 00:19:49.221 
"write_zeroes": true, 00:19:49.221 "zcopy": false, 00:19:49.221 "get_zone_info": false, 00:19:49.221 "zone_management": false, 00:19:49.221 "zone_append": false, 00:19:49.221 "compare": false, 00:19:49.221 "compare_and_write": false, 00:19:49.221 "abort": false, 00:19:49.221 "seek_hole": false, 00:19:49.221 "seek_data": false, 00:19:49.221 "copy": false, 00:19:49.221 "nvme_iov_md": false 00:19:49.221 }, 00:19:49.221 "memory_domains": [ 00:19:49.221 { 00:19:49.221 "dma_device_id": "system", 00:19:49.221 "dma_device_type": 1 00:19:49.221 }, 00:19:49.221 { 00:19:49.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.221 "dma_device_type": 2 00:19:49.221 }, 00:19:49.221 { 00:19:49.221 "dma_device_id": "system", 00:19:49.221 "dma_device_type": 1 00:19:49.221 }, 00:19:49.221 { 00:19:49.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.221 "dma_device_type": 2 00:19:49.221 }, 00:19:49.221 { 00:19:49.221 "dma_device_id": "system", 00:19:49.221 "dma_device_type": 1 00:19:49.221 }, 00:19:49.221 { 00:19:49.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.221 "dma_device_type": 2 00:19:49.221 } 00:19:49.221 ], 00:19:49.221 "driver_specific": { 00:19:49.221 "raid": { 00:19:49.221 "uuid": "c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7", 00:19:49.221 "strip_size_kb": 64, 00:19:49.221 "state": "online", 00:19:49.221 "raid_level": "raid0", 00:19:49.221 "superblock": true, 00:19:49.221 "num_base_bdevs": 3, 00:19:49.221 "num_base_bdevs_discovered": 3, 00:19:49.221 "num_base_bdevs_operational": 3, 00:19:49.221 "base_bdevs_list": [ 00:19:49.221 { 00:19:49.221 "name": "pt1", 00:19:49.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:49.221 "is_configured": true, 00:19:49.221 "data_offset": 2048, 00:19:49.221 "data_size": 63488 00:19:49.221 }, 00:19:49.221 { 00:19:49.221 "name": "pt2", 00:19:49.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:49.221 "is_configured": true, 00:19:49.221 "data_offset": 2048, 00:19:49.221 "data_size": 63488 00:19:49.221 }, 00:19:49.221 
{ 00:19:49.221 "name": "pt3", 00:19:49.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:49.221 "is_configured": true, 00:19:49.221 "data_offset": 2048, 00:19:49.221 "data_size": 63488 00:19:49.221 } 00:19:49.221 ] 00:19:49.221 } 00:19:49.221 } 00:19:49.221 }' 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:49.221 pt2 00:19:49.221 pt3' 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.221 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:49.222 17:07:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.222 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.483 
[2024-11-08 17:07:25.961010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7 '!=' c9ed34d3-88d7-42c2-9afc-ca585d0c7ef7 ']' 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63907 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63907 ']' 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63907 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:49.483 17:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63907 00:19:49.483 killing process with pid 63907 00:19:49.483 17:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:49.483 17:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:49.483 17:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63907' 00:19:49.483 17:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63907 00:19:49.483 17:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63907 00:19:49.483 [2024-11-08 17:07:26.019005] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.483 [2024-11-08 17:07:26.019168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.483 [2024-11-08 17:07:26.019260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.483 [2024-11-08 17:07:26.019276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:49.743 [2024-11-08 17:07:26.244084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:50.715 ************************************ 00:19:50.715 END TEST raid_superblock_test 00:19:50.715 ************************************ 00:19:50.715 17:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:50.715 00:19:50.715 real 0m4.326s 00:19:50.715 user 0m5.938s 00:19:50.715 sys 0m0.825s 00:19:50.715 17:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:50.715 17:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.715 17:07:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:19:50.715 17:07:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:50.715 17:07:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:50.715 17:07:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:50.715 ************************************ 00:19:50.715 START TEST raid_read_error_test 00:19:50.715 ************************************ 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:19:50.715 17:07:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:50.715 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.38Cp1xl5CN 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64149 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64149 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 64149 ']' 00:19:50.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:50.716 17:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.716 [2024-11-08 17:07:27.324347] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:19:50.716 [2024-11-08 17:07:27.324905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64149 ] 00:19:50.976 [2024-11-08 17:07:27.512892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.976 [2024-11-08 17:07:27.679371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.243 [2024-11-08 17:07:27.862042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.243 [2024-11-08 17:07:27.862157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.838 BaseBdev1_malloc 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.838 true 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.838 [2024-11-08 17:07:28.324084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:51.838 [2024-11-08 17:07:28.324421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.838 [2024-11-08 17:07:28.324488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:51.838 [2024-11-08 17:07:28.324582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.838 [2024-11-08 17:07:28.327866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.838 [2024-11-08 17:07:28.327932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:51.838 BaseBdev1 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.838 BaseBdev2_malloc 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.838 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.839 true 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.839 [2024-11-08 17:07:28.394352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:51.839 [2024-11-08 17:07:28.394455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.839 [2024-11-08 17:07:28.394483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:51.839 [2024-11-08 17:07:28.394498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.839 [2024-11-08 17:07:28.397479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.839 [2024-11-08 17:07:28.397547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:51.839 BaseBdev2 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.839 BaseBdev3_malloc 00:19:51.839 17:07:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.839 true 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.839 [2024-11-08 17:07:28.465210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:51.839 [2024-11-08 17:07:28.465308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.839 [2024-11-08 17:07:28.465336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:51.839 [2024-11-08 17:07:28.465350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.839 [2024-11-08 17:07:28.468341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.839 [2024-11-08 17:07:28.468403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:51.839 BaseBdev3 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.839 [2024-11-08 17:07:28.473365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.839 [2024-11-08 17:07:28.476054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:51.839 [2024-11-08 17:07:28.476311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:51.839 [2024-11-08 17:07:28.476621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:51.839 [2024-11-08 17:07:28.476667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:51.839 [2024-11-08 17:07:28.477236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:19:51.839 [2024-11-08 17:07:28.477551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:51.839 [2024-11-08 17:07:28.477599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:51.839 [2024-11-08 17:07:28.478125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:51.839 17:07:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.839 "name": "raid_bdev1", 00:19:51.839 "uuid": "18758bb2-1ed2-4d2b-96b9-4abe1bff4174", 00:19:51.839 "strip_size_kb": 64, 00:19:51.839 "state": "online", 00:19:51.839 "raid_level": "raid0", 00:19:51.839 "superblock": true, 00:19:51.839 "num_base_bdevs": 3, 00:19:51.839 "num_base_bdevs_discovered": 3, 00:19:51.839 "num_base_bdevs_operational": 3, 00:19:51.839 "base_bdevs_list": [ 00:19:51.839 { 00:19:51.839 "name": "BaseBdev1", 00:19:51.839 "uuid": "2303a02e-7e45-595a-ae5c-9e6b68fdc40e", 00:19:51.839 "is_configured": true, 00:19:51.839 "data_offset": 2048, 00:19:51.839 "data_size": 63488 00:19:51.839 }, 00:19:51.839 { 00:19:51.839 "name": "BaseBdev2", 00:19:51.839 "uuid": "72011be6-003a-5430-bf5d-276bb2d670b8", 00:19:51.839 "is_configured": true, 00:19:51.839 "data_offset": 2048, 00:19:51.839 "data_size": 63488 
00:19:51.839 }, 00:19:51.839 { 00:19:51.839 "name": "BaseBdev3", 00:19:51.839 "uuid": "09a42ac2-c832-5000-b85e-367b512304c2", 00:19:51.839 "is_configured": true, 00:19:51.839 "data_offset": 2048, 00:19:51.839 "data_size": 63488 00:19:51.839 } 00:19:51.839 ] 00:19:51.839 }' 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.839 17:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.409 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:52.409 17:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:52.409 [2024-11-08 17:07:28.919784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.389 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.390 "name": "raid_bdev1", 00:19:53.390 "uuid": "18758bb2-1ed2-4d2b-96b9-4abe1bff4174", 00:19:53.390 "strip_size_kb": 64, 00:19:53.390 "state": "online", 00:19:53.390 "raid_level": "raid0", 00:19:53.390 "superblock": true, 00:19:53.390 "num_base_bdevs": 3, 00:19:53.390 "num_base_bdevs_discovered": 3, 00:19:53.390 "num_base_bdevs_operational": 3, 00:19:53.390 "base_bdevs_list": [ 00:19:53.390 { 00:19:53.390 "name": "BaseBdev1", 00:19:53.390 "uuid": "2303a02e-7e45-595a-ae5c-9e6b68fdc40e", 00:19:53.390 "is_configured": true, 00:19:53.390 "data_offset": 2048, 00:19:53.390 "data_size": 63488 
00:19:53.390 }, 00:19:53.390 { 00:19:53.390 "name": "BaseBdev2", 00:19:53.390 "uuid": "72011be6-003a-5430-bf5d-276bb2d670b8", 00:19:53.390 "is_configured": true, 00:19:53.390 "data_offset": 2048, 00:19:53.390 "data_size": 63488 00:19:53.390 }, 00:19:53.390 { 00:19:53.390 "name": "BaseBdev3", 00:19:53.390 "uuid": "09a42ac2-c832-5000-b85e-367b512304c2", 00:19:53.390 "is_configured": true, 00:19:53.390 "data_offset": 2048, 00:19:53.390 "data_size": 63488 00:19:53.390 } 00:19:53.390 ] 00:19:53.390 }' 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.390 17:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.652 [2024-11-08 17:07:30.147791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:53.652 [2024-11-08 17:07:30.148011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:53.652 [2024-11-08 17:07:30.151374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.652 [2024-11-08 17:07:30.151582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.652 [2024-11-08 17:07:30.151668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.652 { 00:19:53.652 "results": [ 00:19:53.652 { 00:19:53.652 "job": "raid_bdev1", 00:19:53.652 "core_mask": "0x1", 00:19:53.652 "workload": "randrw", 00:19:53.652 "percentage": 50, 00:19:53.652 "status": "finished", 00:19:53.652 "queue_depth": 1, 00:19:53.652 "io_size": 131072, 00:19:53.652 "runtime": 1.2255, 00:19:53.652 "iops": 11645.042839657282, 00:19:53.652
"mibps": 1455.6303549571603, 00:19:53.652 "io_failed": 1, 00:19:53.652 "io_timeout": 0, 00:19:53.652 "avg_latency_us": 120.18245601931702, 00:19:53.652 "min_latency_us": 26.978461538461538, 00:19:53.652 "max_latency_us": 1726.6215384615384 00:19:53.652 } 00:19:53.652 ], 00:19:53.652 "core_count": 1 00:19:53.652 } 00:19:53.652 [2024-11-08 17:07:30.151792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64149 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 64149 ']' 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 64149 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64149 00:19:53.652 killing process with pid 64149 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64149' 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 64149 00:19:53.652 17:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 64149 00:19:53.652 [2024-11-08 17:07:30.185125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:53.652 [2024-11-08
17:07:30.361071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:54.595 17:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.38Cp1xl5CN 00:19:54.596 17:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:54.596 17:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:54.596 17:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.82 00:19:54.596 17:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:19:54.596 17:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:54.596 17:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:54.596 ************************************ 00:19:54.596 END TEST raid_read_error_test 00:19:54.596 ************************************ 00:19:54.596 17:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.82 != \0\.\0\0 ]] 00:19:54.596 00:19:54.596 real 0m4.105s 00:19:54.596 user 0m4.679s 00:19:54.596 sys 0m0.662s 00:19:54.596 17:07:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:54.596 17:07:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.857 17:07:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:19:54.857 17:07:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:54.857 17:07:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:54.857 17:07:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.857 ************************************ 00:19:54.857 START TEST raid_write_error_test 00:19:54.857 ************************************ 00:19:54.857 17:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:19:54.857 17:07:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:19:54.857 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:19:54.857 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:54.857 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:54.857 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:54.857 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:54.857 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:54.858 17:07:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q5bTseEcEZ 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64289 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64289 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 64289 ']' 00:19:54.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:54.858 17:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.858 [2024-11-08 17:07:31.478251] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:19:54.858 [2024-11-08 17:07:31.478706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64289 ] 00:19:55.119 [2024-11-08 17:07:31.645079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.119 [2024-11-08 17:07:31.808175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.378 [2024-11-08 17:07:31.989123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.378 [2024-11-08 17:07:31.989219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 BaseBdev1_malloc 00:19:55.946 17:07:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 true 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 [2024-11-08 17:07:32.416095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:55.946 [2024-11-08 17:07:32.416355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.946 [2024-11-08 17:07:32.416395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:55.946 [2024-11-08 17:07:32.416408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.946 [2024-11-08 17:07:32.419258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.946 [2024-11-08 17:07:32.419465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:55.946 BaseBdev1 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 BaseBdev2_malloc 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 true 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 [2024-11-08 17:07:32.477200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:55.946 [2024-11-08 17:07:32.477432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.946 [2024-11-08 17:07:32.477481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:55.946 [2024-11-08 17:07:32.477543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.946 [2024-11-08 17:07:32.480275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.946 [2024-11-08 17:07:32.480473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:55.946 BaseBdev2 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 BaseBdev3_malloc 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 true 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 [2024-11-08 17:07:32.547645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:55.946 [2024-11-08 17:07:32.547736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.946 [2024-11-08 17:07:32.547774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:55.946 [2024-11-08 17:07:32.547788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.946 [2024-11-08 17:07:32.550584] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.946 [2024-11-08 17:07:32.550645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:55.946 BaseBdev3 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 [2024-11-08 17:07:32.555796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.946 [2024-11-08 17:07:32.558278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:55.946 [2024-11-08 17:07:32.558520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:55.946 [2024-11-08 17:07:32.558797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:55.946 [2024-11-08 17:07:32.558818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:55.946 [2024-11-08 17:07:32.559134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:19:55.946 [2024-11-08 17:07:32.559316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:55.946 [2024-11-08 17:07:32.559331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:55.946 [2024-11-08 17:07:32.559495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.946 
17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:55.946 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.947 "name": "raid_bdev1", 00:19:55.947 "uuid": "d0a430e7-d28b-4452-9def-2c619c8ff5ab", 00:19:55.947 "strip_size_kb": 64, 00:19:55.947 "state": "online", 00:19:55.947 "raid_level": "raid0", 00:19:55.947 "superblock": true, 
00:19:55.947 "num_base_bdevs": 3, 00:19:55.947 "num_base_bdevs_discovered": 3, 00:19:55.947 "num_base_bdevs_operational": 3, 00:19:55.947 "base_bdevs_list": [ 00:19:55.947 { 00:19:55.947 "name": "BaseBdev1", 00:19:55.947 "uuid": "fd03a48d-98ee-58d6-aa05-c0357065d6f3", 00:19:55.947 "is_configured": true, 00:19:55.947 "data_offset": 2048, 00:19:55.947 "data_size": 63488 00:19:55.947 }, 00:19:55.947 { 00:19:55.947 "name": "BaseBdev2", 00:19:55.947 "uuid": "ed6571ae-d1f5-5750-9974-a6b7dca82b05", 00:19:55.947 "is_configured": true, 00:19:55.947 "data_offset": 2048, 00:19:55.947 "data_size": 63488 00:19:55.947 }, 00:19:55.947 { 00:19:55.947 "name": "BaseBdev3", 00:19:55.947 "uuid": "9fd2ff77-a2a9-59e1-ba3f-919df43e3f67", 00:19:55.947 "is_configured": true, 00:19:55.947 "data_offset": 2048, 00:19:55.947 "data_size": 63488 00:19:55.947 } 00:19:55.947 ] 00:19:55.947 }' 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.947 17:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.233 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:56.233 17:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:56.502 [2024-11-08 17:07:32.977156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:57.442 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:57.443 "name": "raid_bdev1", 00:19:57.443 "uuid": "d0a430e7-d28b-4452-9def-2c619c8ff5ab", 00:19:57.443 "strip_size_kb": 64, 00:19:57.443 "state": "online", 00:19:57.443 "raid_level": "raid0", 00:19:57.443 "superblock": true, 00:19:57.443 "num_base_bdevs": 3, 00:19:57.443 "num_base_bdevs_discovered": 3, 00:19:57.443 "num_base_bdevs_operational": 3, 00:19:57.443 "base_bdevs_list": [ 00:19:57.443 { 00:19:57.443 "name": "BaseBdev1", 00:19:57.443 "uuid": "fd03a48d-98ee-58d6-aa05-c0357065d6f3", 00:19:57.443 "is_configured": true, 00:19:57.443 "data_offset": 2048, 00:19:57.443 "data_size": 63488 00:19:57.443 }, 00:19:57.443 { 00:19:57.443 "name": "BaseBdev2", 00:19:57.443 "uuid": "ed6571ae-d1f5-5750-9974-a6b7dca82b05", 00:19:57.443 "is_configured": true, 00:19:57.443 "data_offset": 2048, 00:19:57.443 "data_size": 63488 00:19:57.443 }, 00:19:57.443 { 00:19:57.443 "name": "BaseBdev3", 00:19:57.443 "uuid": "9fd2ff77-a2a9-59e1-ba3f-919df43e3f67", 00:19:57.443 "is_configured": true, 00:19:57.443 "data_offset": 2048, 00:19:57.443 "data_size": 63488 00:19:57.443 } 00:19:57.443 ] 00:19:57.443 }' 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.443 17:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.748 [2024-11-08 17:07:34.272649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.748 [2024-11-08 17:07:34.272700] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.748 [2024-11-08 17:07:34.275989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:19:57.748 [2024-11-08 17:07:34.276063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.748 [2024-11-08 17:07:34.276114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.748 [2024-11-08 17:07:34.276127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:57.748 { 00:19:57.748 "results": [ 00:19:57.748 { 00:19:57.748 "job": "raid_bdev1", 00:19:57.748 "core_mask": "0x1", 00:19:57.748 "workload": "randrw", 00:19:57.748 "percentage": 50, 00:19:57.748 "status": "finished", 00:19:57.748 "queue_depth": 1, 00:19:57.748 "io_size": 131072, 00:19:57.748 "runtime": 1.292756, 00:19:57.748 "iops": 11593.835186222303, 00:19:57.748 "mibps": 1449.2293982777878, 00:19:57.748 "io_failed": 1, 00:19:57.748 "io_timeout": 0, 00:19:57.748 "avg_latency_us": 120.62785242511174, 00:19:57.748 "min_latency_us": 28.947692307692307, 00:19:57.748 "max_latency_us": 1751.8276923076924 00:19:57.748 } 00:19:57.748 ], 00:19:57.748 "core_count": 1 00:19:57.748 } 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64289 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 64289 ']' 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 64289 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64289 00:19:57.748 killing process with pid 64289 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64289' 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 64289 00:19:57.748 17:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 64289 00:19:57.748 [2024-11-08 17:07:34.310498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:58.012 [2024-11-08 17:07:34.484883] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:58.948 17:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:58.948 17:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q5bTseEcEZ 00:19:58.948 17:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:58.949 17:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:19:58.949 17:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:19:58.949 17:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:58.949 17:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:58.949 ************************************ 00:19:58.949 END TEST raid_write_error_test 00:19:58.949 ************************************ 00:19:58.949 17:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:19:58.949 00:19:58.949 real 0m4.037s 00:19:58.949 user 0m4.591s 00:19:58.949 sys 0m0.629s 00:19:58.949 17:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:58.949 17:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.949 
17:07:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:58.949 17:07:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:19:58.949 17:07:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:58.949 17:07:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:58.949 17:07:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.949 ************************************ 00:19:58.949 START TEST raid_state_function_test 00:19:58.949 ************************************ 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:58.949 Process raid pid: 64427 00:19:58.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64427 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64427' 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64427 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 64427 ']' 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:58.949 17:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.949 [2024-11-08 17:07:35.578779] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:19:58.949 [2024-11-08 17:07:35.579001] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.207 [2024-11-08 17:07:35.753792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.469 [2024-11-08 17:07:35.924463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.469 [2024-11-08 17:07:36.111440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:59.469 [2024-11-08 17:07:36.111524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.040 [2024-11-08 17:07:36.509979] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:00.040 [2024-11-08 17:07:36.510259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:00.040 [2024-11-08 17:07:36.510369] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.040 [2024-11-08 17:07:36.510402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:00.040 [2024-11-08 17:07:36.510422] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:20:00.040 [2024-11-08 17:07:36.510434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.040 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.040 17:07:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.040 "name": "Existed_Raid", 00:20:00.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.041 "strip_size_kb": 64, 00:20:00.041 "state": "configuring", 00:20:00.041 "raid_level": "concat", 00:20:00.041 "superblock": false, 00:20:00.041 "num_base_bdevs": 3, 00:20:00.041 "num_base_bdevs_discovered": 0, 00:20:00.041 "num_base_bdevs_operational": 3, 00:20:00.041 "base_bdevs_list": [ 00:20:00.041 { 00:20:00.041 "name": "BaseBdev1", 00:20:00.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.041 "is_configured": false, 00:20:00.041 "data_offset": 0, 00:20:00.041 "data_size": 0 00:20:00.041 }, 00:20:00.041 { 00:20:00.041 "name": "BaseBdev2", 00:20:00.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.041 "is_configured": false, 00:20:00.041 "data_offset": 0, 00:20:00.041 "data_size": 0 00:20:00.041 }, 00:20:00.041 { 00:20:00.041 "name": "BaseBdev3", 00:20:00.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.041 "is_configured": false, 00:20:00.041 "data_offset": 0, 00:20:00.041 "data_size": 0 00:20:00.041 } 00:20:00.041 ] 00:20:00.041 }' 00:20:00.041 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.041 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.302 [2024-11-08 17:07:36.862531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:00.302 [2024-11-08 17:07:36.862768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.302 [2024-11-08 17:07:36.870547] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:00.302 [2024-11-08 17:07:36.870802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:00.302 [2024-11-08 17:07:36.870888] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.302 [2024-11-08 17:07:36.870923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:00.302 [2024-11-08 17:07:36.870946] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:00.302 [2024-11-08 17:07:36.870972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.302 [2024-11-08 17:07:36.914630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.302 BaseBdev1 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.302 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.302 [ 00:20:00.302 { 00:20:00.302 "name": "BaseBdev1", 00:20:00.302 "aliases": [ 00:20:00.303 "f0f95a8d-e9b8-4e9f-8543-22bc71b07146" 00:20:00.303 ], 00:20:00.303 "product_name": "Malloc disk", 00:20:00.303 "block_size": 512, 00:20:00.303 "num_blocks": 65536, 00:20:00.303 "uuid": "f0f95a8d-e9b8-4e9f-8543-22bc71b07146", 00:20:00.303 "assigned_rate_limits": { 00:20:00.303 "rw_ios_per_sec": 0, 00:20:00.303 "rw_mbytes_per_sec": 0, 00:20:00.303 "r_mbytes_per_sec": 0, 00:20:00.303 "w_mbytes_per_sec": 0 00:20:00.303 }, 
00:20:00.303 "claimed": true, 00:20:00.303 "claim_type": "exclusive_write", 00:20:00.303 "zoned": false, 00:20:00.303 "supported_io_types": { 00:20:00.303 "read": true, 00:20:00.303 "write": true, 00:20:00.303 "unmap": true, 00:20:00.303 "flush": true, 00:20:00.303 "reset": true, 00:20:00.303 "nvme_admin": false, 00:20:00.303 "nvme_io": false, 00:20:00.303 "nvme_io_md": false, 00:20:00.303 "write_zeroes": true, 00:20:00.303 "zcopy": true, 00:20:00.303 "get_zone_info": false, 00:20:00.303 "zone_management": false, 00:20:00.303 "zone_append": false, 00:20:00.303 "compare": false, 00:20:00.303 "compare_and_write": false, 00:20:00.303 "abort": true, 00:20:00.303 "seek_hole": false, 00:20:00.303 "seek_data": false, 00:20:00.303 "copy": true, 00:20:00.303 "nvme_iov_md": false 00:20:00.303 }, 00:20:00.303 "memory_domains": [ 00:20:00.303 { 00:20:00.303 "dma_device_id": "system", 00:20:00.303 "dma_device_type": 1 00:20:00.303 }, 00:20:00.303 { 00:20:00.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.303 "dma_device_type": 2 00:20:00.303 } 00:20:00.303 ], 00:20:00.303 "driver_specific": {} 00:20:00.303 } 00:20:00.303 ] 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.303 17:07:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.303 "name": "Existed_Raid", 00:20:00.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.303 "strip_size_kb": 64, 00:20:00.303 "state": "configuring", 00:20:00.303 "raid_level": "concat", 00:20:00.303 "superblock": false, 00:20:00.303 "num_base_bdevs": 3, 00:20:00.303 "num_base_bdevs_discovered": 1, 00:20:00.303 "num_base_bdevs_operational": 3, 00:20:00.303 "base_bdevs_list": [ 00:20:00.303 { 00:20:00.303 "name": "BaseBdev1", 00:20:00.303 "uuid": "f0f95a8d-e9b8-4e9f-8543-22bc71b07146", 00:20:00.303 "is_configured": true, 00:20:00.303 "data_offset": 0, 00:20:00.303 "data_size": 65536 00:20:00.303 }, 00:20:00.303 { 00:20:00.303 "name": "BaseBdev2", 00:20:00.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.303 "is_configured": false, 00:20:00.303 
"data_offset": 0, 00:20:00.303 "data_size": 0 00:20:00.303 }, 00:20:00.303 { 00:20:00.303 "name": "BaseBdev3", 00:20:00.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.303 "is_configured": false, 00:20:00.303 "data_offset": 0, 00:20:00.303 "data_size": 0 00:20:00.303 } 00:20:00.303 ] 00:20:00.303 }' 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.303 17:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.875 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:00.875 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.876 [2024-11-08 17:07:37.282817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:00.876 [2024-11-08 17:07:37.283078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.876 [2024-11-08 17:07:37.290886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.876 [2024-11-08 17:07:37.293488] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.876 [2024-11-08 17:07:37.293698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:20:00.876 [2024-11-08 17:07:37.293744] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:00.876 [2024-11-08 17:07:37.293773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.876 "name": "Existed_Raid", 00:20:00.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.876 "strip_size_kb": 64, 00:20:00.876 "state": "configuring", 00:20:00.876 "raid_level": "concat", 00:20:00.876 "superblock": false, 00:20:00.876 "num_base_bdevs": 3, 00:20:00.876 "num_base_bdevs_discovered": 1, 00:20:00.876 "num_base_bdevs_operational": 3, 00:20:00.876 "base_bdevs_list": [ 00:20:00.876 { 00:20:00.876 "name": "BaseBdev1", 00:20:00.876 "uuid": "f0f95a8d-e9b8-4e9f-8543-22bc71b07146", 00:20:00.876 "is_configured": true, 00:20:00.876 "data_offset": 0, 00:20:00.876 "data_size": 65536 00:20:00.876 }, 00:20:00.876 { 00:20:00.876 "name": "BaseBdev2", 00:20:00.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.876 "is_configured": false, 00:20:00.876 "data_offset": 0, 00:20:00.876 "data_size": 0 00:20:00.876 }, 00:20:00.876 { 00:20:00.876 "name": "BaseBdev3", 00:20:00.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.876 "is_configured": false, 00:20:00.876 "data_offset": 0, 00:20:00.876 "data_size": 0 00:20:00.876 } 00:20:00.876 ] 00:20:00.876 }' 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.876 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.137 BaseBdev2 00:20:01.137 [2024-11-08 17:07:37.670964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.137 [ 00:20:01.137 { 00:20:01.137 "name": "BaseBdev2", 00:20:01.137 "aliases": [ 00:20:01.137 "3ffc62e8-178b-4cba-bf7b-ee2077008867" 00:20:01.137 ], 
00:20:01.137 "product_name": "Malloc disk", 00:20:01.137 "block_size": 512, 00:20:01.137 "num_blocks": 65536, 00:20:01.137 "uuid": "3ffc62e8-178b-4cba-bf7b-ee2077008867", 00:20:01.137 "assigned_rate_limits": { 00:20:01.137 "rw_ios_per_sec": 0, 00:20:01.137 "rw_mbytes_per_sec": 0, 00:20:01.137 "r_mbytes_per_sec": 0, 00:20:01.137 "w_mbytes_per_sec": 0 00:20:01.137 }, 00:20:01.137 "claimed": true, 00:20:01.137 "claim_type": "exclusive_write", 00:20:01.137 "zoned": false, 00:20:01.137 "supported_io_types": { 00:20:01.137 "read": true, 00:20:01.137 "write": true, 00:20:01.137 "unmap": true, 00:20:01.137 "flush": true, 00:20:01.137 "reset": true, 00:20:01.137 "nvme_admin": false, 00:20:01.137 "nvme_io": false, 00:20:01.137 "nvme_io_md": false, 00:20:01.137 "write_zeroes": true, 00:20:01.137 "zcopy": true, 00:20:01.137 "get_zone_info": false, 00:20:01.137 "zone_management": false, 00:20:01.137 "zone_append": false, 00:20:01.137 "compare": false, 00:20:01.137 "compare_and_write": false, 00:20:01.137 "abort": true, 00:20:01.137 "seek_hole": false, 00:20:01.137 "seek_data": false, 00:20:01.137 "copy": true, 00:20:01.137 "nvme_iov_md": false 00:20:01.137 }, 00:20:01.137 "memory_domains": [ 00:20:01.137 { 00:20:01.137 "dma_device_id": "system", 00:20:01.137 "dma_device_type": 1 00:20:01.137 }, 00:20:01.137 { 00:20:01.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.137 "dma_device_type": 2 00:20:01.137 } 00:20:01.137 ], 00:20:01.137 "driver_specific": {} 00:20:01.137 } 00:20:01.137 ] 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.137 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.137 "name": "Existed_Raid", 00:20:01.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.137 "strip_size_kb": 64, 00:20:01.137 "state": "configuring", 00:20:01.137 "raid_level": "concat", 00:20:01.137 
"superblock": false, 00:20:01.137 "num_base_bdevs": 3, 00:20:01.137 "num_base_bdevs_discovered": 2, 00:20:01.137 "num_base_bdevs_operational": 3, 00:20:01.137 "base_bdevs_list": [ 00:20:01.137 { 00:20:01.137 "name": "BaseBdev1", 00:20:01.137 "uuid": "f0f95a8d-e9b8-4e9f-8543-22bc71b07146", 00:20:01.138 "is_configured": true, 00:20:01.138 "data_offset": 0, 00:20:01.138 "data_size": 65536 00:20:01.138 }, 00:20:01.138 { 00:20:01.138 "name": "BaseBdev2", 00:20:01.138 "uuid": "3ffc62e8-178b-4cba-bf7b-ee2077008867", 00:20:01.138 "is_configured": true, 00:20:01.138 "data_offset": 0, 00:20:01.138 "data_size": 65536 00:20:01.138 }, 00:20:01.138 { 00:20:01.138 "name": "BaseBdev3", 00:20:01.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.138 "is_configured": false, 00:20:01.138 "data_offset": 0, 00:20:01.138 "data_size": 0 00:20:01.138 } 00:20:01.138 ] 00:20:01.138 }' 00:20:01.138 17:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.138 17:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.411 [2024-11-08 17:07:38.081486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:01.411 [2024-11-08 17:07:38.081583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:01.411 [2024-11-08 17:07:38.081601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:01.411 [2024-11-08 17:07:38.082052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:01.411 [2024-11-08 17:07:38.082270] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:01.411 [2024-11-08 17:07:38.082281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:01.411 [2024-11-08 17:07:38.082657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.411 BaseBdev3 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.411 [ 00:20:01.411 { 00:20:01.411 
"name": "BaseBdev3", 00:20:01.411 "aliases": [ 00:20:01.411 "d2f830f9-6eee-4f7e-98e0-33160ecddb95" 00:20:01.411 ], 00:20:01.411 "product_name": "Malloc disk", 00:20:01.411 "block_size": 512, 00:20:01.411 "num_blocks": 65536, 00:20:01.411 "uuid": "d2f830f9-6eee-4f7e-98e0-33160ecddb95", 00:20:01.411 "assigned_rate_limits": { 00:20:01.411 "rw_ios_per_sec": 0, 00:20:01.411 "rw_mbytes_per_sec": 0, 00:20:01.411 "r_mbytes_per_sec": 0, 00:20:01.411 "w_mbytes_per_sec": 0 00:20:01.411 }, 00:20:01.411 "claimed": true, 00:20:01.411 "claim_type": "exclusive_write", 00:20:01.411 "zoned": false, 00:20:01.411 "supported_io_types": { 00:20:01.411 "read": true, 00:20:01.411 "write": true, 00:20:01.411 "unmap": true, 00:20:01.411 "flush": true, 00:20:01.411 "reset": true, 00:20:01.411 "nvme_admin": false, 00:20:01.411 "nvme_io": false, 00:20:01.411 "nvme_io_md": false, 00:20:01.411 "write_zeroes": true, 00:20:01.411 "zcopy": true, 00:20:01.411 "get_zone_info": false, 00:20:01.411 "zone_management": false, 00:20:01.411 "zone_append": false, 00:20:01.411 "compare": false, 00:20:01.411 "compare_and_write": false, 00:20:01.411 "abort": true, 00:20:01.411 "seek_hole": false, 00:20:01.411 "seek_data": false, 00:20:01.411 "copy": true, 00:20:01.411 "nvme_iov_md": false 00:20:01.411 }, 00:20:01.411 "memory_domains": [ 00:20:01.411 { 00:20:01.411 "dma_device_id": "system", 00:20:01.411 "dma_device_type": 1 00:20:01.411 }, 00:20:01.411 { 00:20:01.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.411 "dma_device_type": 2 00:20:01.411 } 00:20:01.411 ], 00:20:01.411 "driver_specific": {} 00:20:01.411 } 00:20:01.411 ] 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.411 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.695 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.695 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.695 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.695 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.695 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.695 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.695 "name": "Existed_Raid", 00:20:01.695 "uuid": "5cceee47-66e4-4a63-b119-5c5f88170fe5", 00:20:01.695 
"strip_size_kb": 64, 00:20:01.695 "state": "online", 00:20:01.695 "raid_level": "concat", 00:20:01.695 "superblock": false, 00:20:01.695 "num_base_bdevs": 3, 00:20:01.695 "num_base_bdevs_discovered": 3, 00:20:01.695 "num_base_bdevs_operational": 3, 00:20:01.695 "base_bdevs_list": [ 00:20:01.695 { 00:20:01.695 "name": "BaseBdev1", 00:20:01.695 "uuid": "f0f95a8d-e9b8-4e9f-8543-22bc71b07146", 00:20:01.695 "is_configured": true, 00:20:01.695 "data_offset": 0, 00:20:01.695 "data_size": 65536 00:20:01.695 }, 00:20:01.695 { 00:20:01.695 "name": "BaseBdev2", 00:20:01.695 "uuid": "3ffc62e8-178b-4cba-bf7b-ee2077008867", 00:20:01.695 "is_configured": true, 00:20:01.695 "data_offset": 0, 00:20:01.695 "data_size": 65536 00:20:01.695 }, 00:20:01.695 { 00:20:01.695 "name": "BaseBdev3", 00:20:01.695 "uuid": "d2f830f9-6eee-4f7e-98e0-33160ecddb95", 00:20:01.696 "is_configured": true, 00:20:01.696 "data_offset": 0, 00:20:01.696 "data_size": 65536 00:20:01.696 } 00:20:01.696 ] 00:20:01.696 }' 00:20:01.696 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.696 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:01.956 [2024-11-08 17:07:38.462084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.956 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:01.956 "name": "Existed_Raid", 00:20:01.956 "aliases": [ 00:20:01.956 "5cceee47-66e4-4a63-b119-5c5f88170fe5" 00:20:01.956 ], 00:20:01.956 "product_name": "Raid Volume", 00:20:01.956 "block_size": 512, 00:20:01.956 "num_blocks": 196608, 00:20:01.956 "uuid": "5cceee47-66e4-4a63-b119-5c5f88170fe5", 00:20:01.956 "assigned_rate_limits": { 00:20:01.956 "rw_ios_per_sec": 0, 00:20:01.956 "rw_mbytes_per_sec": 0, 00:20:01.956 "r_mbytes_per_sec": 0, 00:20:01.956 "w_mbytes_per_sec": 0 00:20:01.956 }, 00:20:01.956 "claimed": false, 00:20:01.956 "zoned": false, 00:20:01.956 "supported_io_types": { 00:20:01.956 "read": true, 00:20:01.956 "write": true, 00:20:01.956 "unmap": true, 00:20:01.956 "flush": true, 00:20:01.956 "reset": true, 00:20:01.956 "nvme_admin": false, 00:20:01.956 "nvme_io": false, 00:20:01.956 "nvme_io_md": false, 00:20:01.956 "write_zeroes": true, 00:20:01.956 "zcopy": false, 00:20:01.956 "get_zone_info": false, 00:20:01.956 "zone_management": false, 00:20:01.956 "zone_append": false, 00:20:01.956 "compare": false, 00:20:01.956 "compare_and_write": false, 00:20:01.956 "abort": false, 00:20:01.956 "seek_hole": false, 00:20:01.956 "seek_data": false, 00:20:01.956 "copy": false, 00:20:01.956 "nvme_iov_md": false 00:20:01.956 }, 00:20:01.956 "memory_domains": [ 00:20:01.956 { 00:20:01.956 "dma_device_id": "system", 
00:20:01.956 "dma_device_type": 1 00:20:01.956 }, 00:20:01.956 { 00:20:01.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.956 "dma_device_type": 2 00:20:01.956 }, 00:20:01.956 { 00:20:01.956 "dma_device_id": "system", 00:20:01.956 "dma_device_type": 1 00:20:01.956 }, 00:20:01.956 { 00:20:01.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.957 "dma_device_type": 2 00:20:01.957 }, 00:20:01.957 { 00:20:01.957 "dma_device_id": "system", 00:20:01.957 "dma_device_type": 1 00:20:01.957 }, 00:20:01.957 { 00:20:01.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.957 "dma_device_type": 2 00:20:01.957 } 00:20:01.957 ], 00:20:01.957 "driver_specific": { 00:20:01.957 "raid": { 00:20:01.957 "uuid": "5cceee47-66e4-4a63-b119-5c5f88170fe5", 00:20:01.957 "strip_size_kb": 64, 00:20:01.957 "state": "online", 00:20:01.957 "raid_level": "concat", 00:20:01.957 "superblock": false, 00:20:01.957 "num_base_bdevs": 3, 00:20:01.957 "num_base_bdevs_discovered": 3, 00:20:01.957 "num_base_bdevs_operational": 3, 00:20:01.957 "base_bdevs_list": [ 00:20:01.957 { 00:20:01.957 "name": "BaseBdev1", 00:20:01.957 "uuid": "f0f95a8d-e9b8-4e9f-8543-22bc71b07146", 00:20:01.957 "is_configured": true, 00:20:01.957 "data_offset": 0, 00:20:01.957 "data_size": 65536 00:20:01.957 }, 00:20:01.957 { 00:20:01.957 "name": "BaseBdev2", 00:20:01.957 "uuid": "3ffc62e8-178b-4cba-bf7b-ee2077008867", 00:20:01.957 "is_configured": true, 00:20:01.957 "data_offset": 0, 00:20:01.957 "data_size": 65536 00:20:01.957 }, 00:20:01.957 { 00:20:01.957 "name": "BaseBdev3", 00:20:01.957 "uuid": "d2f830f9-6eee-4f7e-98e0-33160ecddb95", 00:20:01.957 "is_configured": true, 00:20:01.957 "data_offset": 0, 00:20:01.957 "data_size": 65536 00:20:01.957 } 00:20:01.957 ] 00:20:01.957 } 00:20:01.957 } 00:20:01.957 }' 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:01.957 17:07:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:01.957 BaseBdev2 00:20:01.957 BaseBdev3' 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.957 17:07:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 [2024-11-08 17:07:38.661826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:01.957 [2024-11-08 17:07:38.662038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:01.957 [2024-11-08 17:07:38.662149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.217 17:07:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.217 "name": "Existed_Raid", 00:20:02.217 "uuid": "5cceee47-66e4-4a63-b119-5c5f88170fe5", 00:20:02.217 "strip_size_kb": 64, 00:20:02.217 "state": "offline", 00:20:02.217 "raid_level": "concat", 00:20:02.217 "superblock": false, 00:20:02.217 "num_base_bdevs": 3, 00:20:02.217 "num_base_bdevs_discovered": 2, 00:20:02.217 "num_base_bdevs_operational": 2, 00:20:02.217 "base_bdevs_list": [ 00:20:02.217 { 00:20:02.217 "name": null, 00:20:02.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.217 "is_configured": false, 00:20:02.217 "data_offset": 0, 00:20:02.217 "data_size": 65536 00:20:02.217 }, 00:20:02.217 { 00:20:02.217 "name": "BaseBdev2", 00:20:02.217 "uuid": "3ffc62e8-178b-4cba-bf7b-ee2077008867", 00:20:02.217 "is_configured": true, 00:20:02.217 "data_offset": 0, 00:20:02.217 "data_size": 65536 00:20:02.217 }, 00:20:02.217 { 00:20:02.217 "name": "BaseBdev3", 00:20:02.217 "uuid": "d2f830f9-6eee-4f7e-98e0-33160ecddb95", 00:20:02.217 "is_configured": true, 00:20:02.217 "data_offset": 0, 00:20:02.217 "data_size": 65536 00:20:02.217 } 00:20:02.217 ] 00:20:02.217 }' 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.217 17:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.478 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.478 [2024-11-08 17:07:39.166737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 [2024-11-08 17:07:39.282745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:02.738 [2024-11-08 17:07:39.283040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 
-gt 2 ']' 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 BaseBdev2 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:02.738 17:07:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.738 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.997 [ 00:20:02.997 { 00:20:02.997 "name": "BaseBdev2", 00:20:02.997 "aliases": [ 00:20:02.997 "48c51ec0-54b9-4b99-b60c-03eee6a5d317" 00:20:02.997 ], 00:20:02.997 "product_name": "Malloc disk", 00:20:02.997 "block_size": 512, 00:20:02.997 "num_blocks": 65536, 00:20:02.997 "uuid": "48c51ec0-54b9-4b99-b60c-03eee6a5d317", 00:20:02.997 "assigned_rate_limits": { 00:20:02.997 "rw_ios_per_sec": 0, 00:20:02.997 "rw_mbytes_per_sec": 0, 00:20:02.997 "r_mbytes_per_sec": 0, 00:20:02.997 "w_mbytes_per_sec": 0 00:20:02.997 }, 00:20:02.997 "claimed": false, 00:20:02.997 "zoned": false, 00:20:02.997 "supported_io_types": { 00:20:02.997 "read": true, 00:20:02.997 "write": true, 00:20:02.997 "unmap": true, 00:20:02.997 "flush": true, 00:20:02.997 "reset": true, 00:20:02.997 "nvme_admin": false, 00:20:02.997 "nvme_io": false, 00:20:02.997 "nvme_io_md": false, 00:20:02.997 "write_zeroes": true, 00:20:02.997 "zcopy": true, 00:20:02.997 "get_zone_info": false, 00:20:02.997 "zone_management": false, 00:20:02.997 "zone_append": false, 00:20:02.997 "compare": false, 00:20:02.997 "compare_and_write": false, 00:20:02.997 "abort": true, 00:20:02.997 "seek_hole": false, 00:20:02.997 "seek_data": false, 00:20:02.997 "copy": true, 00:20:02.997 "nvme_iov_md": false 00:20:02.997 }, 00:20:02.997 "memory_domains": [ 00:20:02.997 { 00:20:02.997 "dma_device_id": "system", 00:20:02.997 "dma_device_type": 1 00:20:02.997 }, 00:20:02.997 { 00:20:02.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.997 "dma_device_type": 2 00:20:02.997 } 00:20:02.997 ], 00:20:02.997 "driver_specific": {} 00:20:02.997 } 00:20:02.997 ] 00:20:02.997 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.997 17:07:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:20:02.997 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:02.997 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:02.997 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:02.997 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.998 BaseBdev3 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:02.998 
17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.998 [ 00:20:02.998 { 00:20:02.998 "name": "BaseBdev3", 00:20:02.998 "aliases": [ 00:20:02.998 "d0542e5f-89d3-4cce-96a8-bebf2ff3f122" 00:20:02.998 ], 00:20:02.998 "product_name": "Malloc disk", 00:20:02.998 "block_size": 512, 00:20:02.998 "num_blocks": 65536, 00:20:02.998 "uuid": "d0542e5f-89d3-4cce-96a8-bebf2ff3f122", 00:20:02.998 "assigned_rate_limits": { 00:20:02.998 "rw_ios_per_sec": 0, 00:20:02.998 "rw_mbytes_per_sec": 0, 00:20:02.998 "r_mbytes_per_sec": 0, 00:20:02.998 "w_mbytes_per_sec": 0 00:20:02.998 }, 00:20:02.998 "claimed": false, 00:20:02.998 "zoned": false, 00:20:02.998 "supported_io_types": { 00:20:02.998 "read": true, 00:20:02.998 "write": true, 00:20:02.998 "unmap": true, 00:20:02.998 "flush": true, 00:20:02.998 "reset": true, 00:20:02.998 "nvme_admin": false, 00:20:02.998 "nvme_io": false, 00:20:02.998 "nvme_io_md": false, 00:20:02.998 "write_zeroes": true, 00:20:02.998 "zcopy": true, 00:20:02.998 "get_zone_info": false, 00:20:02.998 "zone_management": false, 00:20:02.998 "zone_append": false, 00:20:02.998 "compare": false, 00:20:02.998 "compare_and_write": false, 00:20:02.998 "abort": true, 00:20:02.998 "seek_hole": false, 00:20:02.998 "seek_data": false, 00:20:02.998 "copy": true, 00:20:02.998 "nvme_iov_md": false 00:20:02.998 }, 00:20:02.998 "memory_domains": [ 00:20:02.998 { 00:20:02.998 "dma_device_id": "system", 00:20:02.998 "dma_device_type": 1 00:20:02.998 }, 00:20:02.998 { 00:20:02.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.998 "dma_device_type": 2 00:20:02.998 } 00:20:02.998 ], 00:20:02.998 "driver_specific": {} 00:20:02.998 } 00:20:02.998 ] 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.998 [2024-11-08 17:07:39.532562] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:02.998 [2024-11-08 17:07:39.532822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:02.998 [2024-11-08 17:07:39.532930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:02.998 [2024-11-08 17:07:39.535472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.998 "name": "Existed_Raid", 00:20:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.998 "strip_size_kb": 64, 00:20:02.998 "state": "configuring", 00:20:02.998 "raid_level": "concat", 00:20:02.998 "superblock": false, 00:20:02.998 "num_base_bdevs": 3, 00:20:02.998 "num_base_bdevs_discovered": 2, 00:20:02.998 "num_base_bdevs_operational": 3, 00:20:02.998 "base_bdevs_list": [ 00:20:02.998 { 00:20:02.998 "name": "BaseBdev1", 00:20:02.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.998 "is_configured": false, 00:20:02.998 "data_offset": 0, 00:20:02.998 "data_size": 0 00:20:02.998 }, 00:20:02.998 { 00:20:02.998 "name": "BaseBdev2", 00:20:02.998 "uuid": "48c51ec0-54b9-4b99-b60c-03eee6a5d317", 00:20:02.998 "is_configured": true, 00:20:02.998 "data_offset": 0, 00:20:02.998 "data_size": 65536 00:20:02.998 }, 00:20:02.998 { 00:20:02.998 "name": 
"BaseBdev3", 00:20:02.998 "uuid": "d0542e5f-89d3-4cce-96a8-bebf2ff3f122", 00:20:02.998 "is_configured": true, 00:20:02.998 "data_offset": 0, 00:20:02.998 "data_size": 65536 00:20:02.998 } 00:20:02.998 ] 00:20:02.998 }' 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.998 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.258 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:03.258 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.259 [2024-11-08 17:07:39.872660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.259 "name": "Existed_Raid", 00:20:03.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.259 "strip_size_kb": 64, 00:20:03.259 "state": "configuring", 00:20:03.259 "raid_level": "concat", 00:20:03.259 "superblock": false, 00:20:03.259 "num_base_bdevs": 3, 00:20:03.259 "num_base_bdevs_discovered": 1, 00:20:03.259 "num_base_bdevs_operational": 3, 00:20:03.259 "base_bdevs_list": [ 00:20:03.259 { 00:20:03.259 "name": "BaseBdev1", 00:20:03.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.259 "is_configured": false, 00:20:03.259 "data_offset": 0, 00:20:03.259 "data_size": 0 00:20:03.259 }, 00:20:03.259 { 00:20:03.259 "name": null, 00:20:03.259 "uuid": "48c51ec0-54b9-4b99-b60c-03eee6a5d317", 00:20:03.259 "is_configured": false, 00:20:03.259 "data_offset": 0, 00:20:03.259 "data_size": 65536 00:20:03.259 }, 00:20:03.259 { 00:20:03.259 "name": "BaseBdev3", 00:20:03.259 "uuid": "d0542e5f-89d3-4cce-96a8-bebf2ff3f122", 00:20:03.259 "is_configured": true, 00:20:03.259 "data_offset": 0, 00:20:03.259 "data_size": 65536 00:20:03.259 } 00:20:03.259 ] 00:20:03.259 }' 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.259 17:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.519 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.519 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.519 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.520 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.785 BaseBdev1 00:20:03.785 [2024-11-08 17:07:40.300002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:03.785 
17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.785 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.785 [ 00:20:03.785 { 00:20:03.785 "name": "BaseBdev1", 00:20:03.785 "aliases": [ 00:20:03.785 "f33df7cd-3bb3-48f0-8040-93b836438cda" 00:20:03.785 ], 00:20:03.785 "product_name": "Malloc disk", 00:20:03.785 "block_size": 512, 00:20:03.785 "num_blocks": 65536, 00:20:03.785 "uuid": "f33df7cd-3bb3-48f0-8040-93b836438cda", 00:20:03.785 "assigned_rate_limits": { 00:20:03.785 "rw_ios_per_sec": 0, 00:20:03.785 "rw_mbytes_per_sec": 0, 00:20:03.785 "r_mbytes_per_sec": 0, 00:20:03.785 "w_mbytes_per_sec": 0 00:20:03.785 }, 00:20:03.785 "claimed": true, 00:20:03.785 "claim_type": "exclusive_write", 00:20:03.785 "zoned": false, 00:20:03.785 "supported_io_types": { 00:20:03.785 "read": true, 00:20:03.785 "write": true, 00:20:03.785 "unmap": true, 00:20:03.785 "flush": true, 00:20:03.785 "reset": true, 00:20:03.785 "nvme_admin": false, 00:20:03.785 "nvme_io": false, 00:20:03.785 "nvme_io_md": false, 00:20:03.785 "write_zeroes": true, 00:20:03.786 "zcopy": true, 00:20:03.786 "get_zone_info": false, 00:20:03.786 "zone_management": false, 00:20:03.786 "zone_append": false, 00:20:03.786 "compare": 
false, 00:20:03.786 "compare_and_write": false, 00:20:03.786 "abort": true, 00:20:03.786 "seek_hole": false, 00:20:03.786 "seek_data": false, 00:20:03.786 "copy": true, 00:20:03.786 "nvme_iov_md": false 00:20:03.786 }, 00:20:03.786 "memory_domains": [ 00:20:03.786 { 00:20:03.786 "dma_device_id": "system", 00:20:03.786 "dma_device_type": 1 00:20:03.786 }, 00:20:03.786 { 00:20:03.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.786 "dma_device_type": 2 00:20:03.786 } 00:20:03.786 ], 00:20:03.786 "driver_specific": {} 00:20:03.786 } 00:20:03.786 ] 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.786 "name": "Existed_Raid", 00:20:03.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.786 "strip_size_kb": 64, 00:20:03.786 "state": "configuring", 00:20:03.786 "raid_level": "concat", 00:20:03.786 "superblock": false, 00:20:03.786 "num_base_bdevs": 3, 00:20:03.786 "num_base_bdevs_discovered": 2, 00:20:03.786 "num_base_bdevs_operational": 3, 00:20:03.786 "base_bdevs_list": [ 00:20:03.786 { 00:20:03.786 "name": "BaseBdev1", 00:20:03.786 "uuid": "f33df7cd-3bb3-48f0-8040-93b836438cda", 00:20:03.786 "is_configured": true, 00:20:03.786 "data_offset": 0, 00:20:03.786 "data_size": 65536 00:20:03.786 }, 00:20:03.786 { 00:20:03.786 "name": null, 00:20:03.786 "uuid": "48c51ec0-54b9-4b99-b60c-03eee6a5d317", 00:20:03.786 "is_configured": false, 00:20:03.786 "data_offset": 0, 00:20:03.786 "data_size": 65536 00:20:03.786 }, 00:20:03.786 { 00:20:03.786 "name": "BaseBdev3", 00:20:03.786 "uuid": "d0542e5f-89d3-4cce-96a8-bebf2ff3f122", 00:20:03.786 "is_configured": true, 00:20:03.786 "data_offset": 0, 00:20:03.786 "data_size": 65536 00:20:03.786 } 00:20:03.786 ] 00:20:03.786 }' 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.786 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 
-- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.047 [2024-11-08 17:07:40.708189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.047 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
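The verify_raid_bdev_state helper that runs repeatedly in this log fetches bdev_raid_get_bdevs output, selects the named raid bdev with jq, and compares its fields against the expected values. Below is a minimal Python re-implementation of that comparison for illustration only: the rpc_cmd/jq plumbing is replaced by a literal sample copied from the JSON shape shown in this log, and everything besides the field names is an assumption, not SPDK code.

```python
import json

# Sample bdev_raid_get_bdevs output, shaped like the dumps in this log
# (trimmed to the fields the verify helper actually checks).
RAW = '''[{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}]'''

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    """Mirror of the shell helper: select by name, compare each field."""
    # jq equivalent: .[] | select(.name == "Existed_Raid")
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # num_base_bdevs_discovered is the count of configured base bdevs.
    return sum(b["is_configured"] for b in info["base_bdevs_list"])

discovered = verify_raid_bdev_state(
    json.loads(RAW), "Existed_Raid", "configuring", "concat", 64, 3)
print(discovered)  # 2 -- one base bdev slot is unconfigured at this point
```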
00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.048 "name": "Existed_Raid", 00:20:04.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.048 "strip_size_kb": 64, 00:20:04.048 "state": "configuring", 00:20:04.048 "raid_level": "concat", 00:20:04.048 "superblock": false, 00:20:04.048 "num_base_bdevs": 3, 00:20:04.048 "num_base_bdevs_discovered": 1, 00:20:04.048 "num_base_bdevs_operational": 3, 00:20:04.048 "base_bdevs_list": [ 00:20:04.048 { 00:20:04.048 "name": "BaseBdev1", 00:20:04.048 "uuid": "f33df7cd-3bb3-48f0-8040-93b836438cda", 00:20:04.048 "is_configured": true, 00:20:04.048 "data_offset": 0, 00:20:04.048 "data_size": 65536 00:20:04.048 }, 00:20:04.048 { 00:20:04.048 "name": null, 00:20:04.048 "uuid": "48c51ec0-54b9-4b99-b60c-03eee6a5d317", 00:20:04.048 "is_configured": false, 00:20:04.048 "data_offset": 0, 00:20:04.048 "data_size": 65536 00:20:04.048 }, 00:20:04.048 { 00:20:04.048 "name": null, 00:20:04.048 "uuid": "d0542e5f-89d3-4cce-96a8-bebf2ff3f122", 00:20:04.048 "is_configured": false, 00:20:04.048 
"data_offset": 0, 00:20:04.048 "data_size": 65536 00:20:04.048 } 00:20:04.048 ] 00:20:04.048 }' 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.048 17:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.615 [2024-11-08 17:07:41.116363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.615 17:07:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.615 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.616 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.616 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.616 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.616 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.616 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.616 "name": "Existed_Raid", 00:20:04.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.616 "strip_size_kb": 64, 00:20:04.616 "state": "configuring", 00:20:04.616 "raid_level": "concat", 00:20:04.616 "superblock": false, 00:20:04.616 "num_base_bdevs": 3, 00:20:04.616 "num_base_bdevs_discovered": 2, 00:20:04.616 "num_base_bdevs_operational": 3, 00:20:04.616 "base_bdevs_list": [ 00:20:04.616 { 00:20:04.616 "name": "BaseBdev1", 00:20:04.616 "uuid": "f33df7cd-3bb3-48f0-8040-93b836438cda", 00:20:04.616 "is_configured": true, 00:20:04.616 "data_offset": 
0, 00:20:04.616 "data_size": 65536 00:20:04.616 }, 00:20:04.616 { 00:20:04.616 "name": null, 00:20:04.616 "uuid": "48c51ec0-54b9-4b99-b60c-03eee6a5d317", 00:20:04.616 "is_configured": false, 00:20:04.616 "data_offset": 0, 00:20:04.616 "data_size": 65536 00:20:04.616 }, 00:20:04.616 { 00:20:04.616 "name": "BaseBdev3", 00:20:04.616 "uuid": "d0542e5f-89d3-4cce-96a8-bebf2ff3f122", 00:20:04.616 "is_configured": true, 00:20:04.616 "data_offset": 0, 00:20:04.616 "data_size": 65536 00:20:04.616 } 00:20:04.616 ] 00:20:04.616 }' 00:20:04.616 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.616 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.873 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:04.873 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.874 [2024-11-08 17:07:41.480362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.874 17:07:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.874 "name": "Existed_Raid", 00:20:04.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.874 "strip_size_kb": 64, 00:20:04.874 "state": "configuring", 00:20:04.874 
"raid_level": "concat", 00:20:04.874 "superblock": false, 00:20:04.874 "num_base_bdevs": 3, 00:20:04.874 "num_base_bdevs_discovered": 1, 00:20:04.874 "num_base_bdevs_operational": 3, 00:20:04.874 "base_bdevs_list": [ 00:20:04.874 { 00:20:04.874 "name": null, 00:20:04.874 "uuid": "f33df7cd-3bb3-48f0-8040-93b836438cda", 00:20:04.874 "is_configured": false, 00:20:04.874 "data_offset": 0, 00:20:04.874 "data_size": 65536 00:20:04.874 }, 00:20:04.874 { 00:20:04.874 "name": null, 00:20:04.874 "uuid": "48c51ec0-54b9-4b99-b60c-03eee6a5d317", 00:20:04.874 "is_configured": false, 00:20:04.874 "data_offset": 0, 00:20:04.874 "data_size": 65536 00:20:04.874 }, 00:20:04.874 { 00:20:04.874 "name": "BaseBdev3", 00:20:04.874 "uuid": "d0542e5f-89d3-4cce-96a8-bebf2ff3f122", 00:20:04.874 "is_configured": true, 00:20:04.874 "data_offset": 0, 00:20:04.874 "data_size": 65536 00:20:04.874 } 00:20:04.874 ] 00:20:04.874 }' 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.874 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.441 [2024-11-08 17:07:41.911960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.441 "name": "Existed_Raid", 00:20:05.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.441 "strip_size_kb": 64, 00:20:05.441 "state": "configuring", 00:20:05.441 "raid_level": "concat", 00:20:05.441 "superblock": false, 00:20:05.441 "num_base_bdevs": 3, 00:20:05.441 "num_base_bdevs_discovered": 2, 00:20:05.441 "num_base_bdevs_operational": 3, 00:20:05.441 "base_bdevs_list": [ 00:20:05.441 { 00:20:05.441 "name": null, 00:20:05.441 "uuid": "f33df7cd-3bb3-48f0-8040-93b836438cda", 00:20:05.441 "is_configured": false, 00:20:05.441 "data_offset": 0, 00:20:05.441 "data_size": 65536 00:20:05.441 }, 00:20:05.441 { 00:20:05.441 "name": "BaseBdev2", 00:20:05.441 "uuid": "48c51ec0-54b9-4b99-b60c-03eee6a5d317", 00:20:05.441 "is_configured": true, 00:20:05.441 "data_offset": 0, 00:20:05.441 "data_size": 65536 00:20:05.441 }, 00:20:05.441 { 00:20:05.441 "name": "BaseBdev3", 00:20:05.441 "uuid": "d0542e5f-89d3-4cce-96a8-bebf2ff3f122", 00:20:05.441 "is_configured": true, 00:20:05.441 "data_offset": 0, 00:20:05.441 "data_size": 65536 00:20:05.441 } 00:20:05.441 ] 00:20:05.441 }' 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.441 17:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
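The sequence above removes and re-adds base bdevs (bdev_raid_remove_base_bdev, bdev_malloc_delete, bdev_raid_add_base_bdev), each time checking via jq that the affected slot's is_configured flag flips while the raid bdev stays in the configuring state. A toy model of that bookkeeping follows; it is purely illustrative (the real transitions live in bdev_raid.c), and only the slot names and the online/configuring rule are taken from the log.

```python
class RaidSlotModel:
    """Tracks is_configured flags the way this test inspects them via jq."""

    def __init__(self, names):
        self.slots = {n: True for n in names}

    def remove_base_bdev(self, name):
        # The slot survives removal; only its configured flag drops.
        self.slots[name] = False

    def add_base_bdev(self, name):
        self.slots[name] = True

    @property
    def discovered(self):
        return sum(self.slots.values())

    @property
    def state(self):
        # online only once every slot is configured, otherwise configuring.
        return "online" if self.discovered == len(self.slots) else "configuring"

m = RaidSlotModel(["BaseBdev1", "BaseBdev2", "BaseBdev3"])
m.remove_base_bdev("BaseBdev2")  # earlier in the test
m.remove_base_bdev("BaseBdev3")  # bdev_raid_remove_base_bdev BaseBdev3
m.add_base_bdev("BaseBdev3")     # bdev_raid_add_base_bdev Existed_Raid BaseBdev3
m.remove_base_bdev("BaseBdev1")  # bdev_malloc_delete BaseBdev1
m.add_base_bdev("BaseBdev2")     # bdev_raid_add_base_bdev Existed_Raid BaseBdev2
print(m.state, m.discovered)     # configuring 2 -- matches the dump above
```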
00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f33df7cd-3bb3-48f0-8040-93b836438cda 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.704 [2024-11-08 17:07:42.325622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:05.704 NewBaseBdev 00:20:05.704 [2024-11-08 17:07:42.325912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:05.704 [2024-11-08 17:07:42.325934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:05.704 [2024-11-08 17:07:42.326222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:05.704 [2024-11-08 17:07:42.326377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:05.704 [2024-11-08 17:07:42.326386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:05.704 
[2024-11-08 17:07:42.326648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.704 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.704 [ 00:20:05.704 { 00:20:05.704 "name": "NewBaseBdev", 00:20:05.704 "aliases": [ 00:20:05.704 "f33df7cd-3bb3-48f0-8040-93b836438cda" 00:20:05.704 ], 00:20:05.704 "product_name": "Malloc disk", 00:20:05.704 "block_size": 512, 00:20:05.704 "num_blocks": 65536, 00:20:05.704 "uuid": 
"f33df7cd-3bb3-48f0-8040-93b836438cda", 00:20:05.704 "assigned_rate_limits": { 00:20:05.704 "rw_ios_per_sec": 0, 00:20:05.704 "rw_mbytes_per_sec": 0, 00:20:05.704 "r_mbytes_per_sec": 0, 00:20:05.704 "w_mbytes_per_sec": 0 00:20:05.704 }, 00:20:05.704 "claimed": true, 00:20:05.704 "claim_type": "exclusive_write", 00:20:05.704 "zoned": false, 00:20:05.704 "supported_io_types": { 00:20:05.704 "read": true, 00:20:05.704 "write": true, 00:20:05.704 "unmap": true, 00:20:05.704 "flush": true, 00:20:05.704 "reset": true, 00:20:05.704 "nvme_admin": false, 00:20:05.704 "nvme_io": false, 00:20:05.704 "nvme_io_md": false, 00:20:05.704 "write_zeroes": true, 00:20:05.704 "zcopy": true, 00:20:05.704 "get_zone_info": false, 00:20:05.704 "zone_management": false, 00:20:05.704 "zone_append": false, 00:20:05.704 "compare": false, 00:20:05.704 "compare_and_write": false, 00:20:05.704 "abort": true, 00:20:05.704 "seek_hole": false, 00:20:05.704 "seek_data": false, 00:20:05.704 "copy": true, 00:20:05.704 "nvme_iov_md": false 00:20:05.704 }, 00:20:05.704 "memory_domains": [ 00:20:05.704 { 00:20:05.704 "dma_device_id": "system", 00:20:05.704 "dma_device_type": 1 00:20:05.704 }, 00:20:05.704 { 00:20:05.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.705 "dma_device_type": 2 00:20:05.705 } 00:20:05.705 ], 00:20:05.705 "driver_specific": {} 00:20:05.705 } 00:20:05.705 ] 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.705 17:07:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.705 "name": "Existed_Raid", 00:20:05.705 "uuid": "9616a73c-0bb5-4877-94dc-6fa779b364bc", 00:20:05.705 "strip_size_kb": 64, 00:20:05.705 "state": "online", 00:20:05.705 "raid_level": "concat", 00:20:05.705 "superblock": false, 00:20:05.705 "num_base_bdevs": 3, 00:20:05.705 "num_base_bdevs_discovered": 3, 00:20:05.705 "num_base_bdevs_operational": 3, 00:20:05.705 "base_bdevs_list": [ 00:20:05.705 { 00:20:05.705 "name": "NewBaseBdev", 00:20:05.705 "uuid": "f33df7cd-3bb3-48f0-8040-93b836438cda", 00:20:05.705 "is_configured": true, 00:20:05.705 "data_offset": 0, 
00:20:05.705 "data_size": 65536 00:20:05.705 }, 00:20:05.705 { 00:20:05.705 "name": "BaseBdev2", 00:20:05.705 "uuid": "48c51ec0-54b9-4b99-b60c-03eee6a5d317", 00:20:05.705 "is_configured": true, 00:20:05.705 "data_offset": 0, 00:20:05.705 "data_size": 65536 00:20:05.705 }, 00:20:05.705 { 00:20:05.705 "name": "BaseBdev3", 00:20:05.705 "uuid": "d0542e5f-89d3-4cce-96a8-bebf2ff3f122", 00:20:05.705 "is_configured": true, 00:20:05.705 "data_offset": 0, 00:20:05.705 "data_size": 65536 00:20:05.705 } 00:20:05.705 ] 00:20:05.705 }' 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.705 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.350 [2024-11-08 17:07:42.686180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.350 17:07:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.350 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:06.350 "name": "Existed_Raid", 00:20:06.350 "aliases": [ 00:20:06.350 "9616a73c-0bb5-4877-94dc-6fa779b364bc" 00:20:06.350 ], 00:20:06.350 "product_name": "Raid Volume", 00:20:06.350 "block_size": 512, 00:20:06.350 "num_blocks": 196608, 00:20:06.350 "uuid": "9616a73c-0bb5-4877-94dc-6fa779b364bc", 00:20:06.350 "assigned_rate_limits": { 00:20:06.350 "rw_ios_per_sec": 0, 00:20:06.350 "rw_mbytes_per_sec": 0, 00:20:06.350 "r_mbytes_per_sec": 0, 00:20:06.350 "w_mbytes_per_sec": 0 00:20:06.350 }, 00:20:06.350 "claimed": false, 00:20:06.350 "zoned": false, 00:20:06.350 "supported_io_types": { 00:20:06.350 "read": true, 00:20:06.350 "write": true, 00:20:06.350 "unmap": true, 00:20:06.350 "flush": true, 00:20:06.350 "reset": true, 00:20:06.350 "nvme_admin": false, 00:20:06.350 "nvme_io": false, 00:20:06.350 "nvme_io_md": false, 00:20:06.350 "write_zeroes": true, 00:20:06.350 "zcopy": false, 00:20:06.350 "get_zone_info": false, 00:20:06.350 "zone_management": false, 00:20:06.350 "zone_append": false, 00:20:06.350 "compare": false, 00:20:06.350 "compare_and_write": false, 00:20:06.350 "abort": false, 00:20:06.350 "seek_hole": false, 00:20:06.350 "seek_data": false, 00:20:06.350 "copy": false, 00:20:06.350 "nvme_iov_md": false 00:20:06.350 }, 00:20:06.350 "memory_domains": [ 00:20:06.350 { 00:20:06.350 "dma_device_id": "system", 00:20:06.351 "dma_device_type": 1 00:20:06.351 }, 00:20:06.351 { 00:20:06.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.351 "dma_device_type": 2 00:20:06.351 }, 00:20:06.351 { 00:20:06.351 "dma_device_id": "system", 00:20:06.351 "dma_device_type": 1 00:20:06.351 }, 00:20:06.351 { 00:20:06.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.351 "dma_device_type": 2 00:20:06.351 }, 00:20:06.351 { 00:20:06.351 "dma_device_id": "system", 00:20:06.351 
"dma_device_type": 1 00:20:06.351 }, 00:20:06.351 { 00:20:06.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.351 "dma_device_type": 2 00:20:06.351 } 00:20:06.351 ], 00:20:06.351 "driver_specific": { 00:20:06.351 "raid": { 00:20:06.351 "uuid": "9616a73c-0bb5-4877-94dc-6fa779b364bc", 00:20:06.351 "strip_size_kb": 64, 00:20:06.351 "state": "online", 00:20:06.351 "raid_level": "concat", 00:20:06.351 "superblock": false, 00:20:06.351 "num_base_bdevs": 3, 00:20:06.351 "num_base_bdevs_discovered": 3, 00:20:06.351 "num_base_bdevs_operational": 3, 00:20:06.351 "base_bdevs_list": [ 00:20:06.351 { 00:20:06.351 "name": "NewBaseBdev", 00:20:06.351 "uuid": "f33df7cd-3bb3-48f0-8040-93b836438cda", 00:20:06.351 "is_configured": true, 00:20:06.351 "data_offset": 0, 00:20:06.351 "data_size": 65536 00:20:06.351 }, 00:20:06.351 { 00:20:06.351 "name": "BaseBdev2", 00:20:06.351 "uuid": "48c51ec0-54b9-4b99-b60c-03eee6a5d317", 00:20:06.351 "is_configured": true, 00:20:06.351 "data_offset": 0, 00:20:06.351 "data_size": 65536 00:20:06.351 }, 00:20:06.351 { 00:20:06.351 "name": "BaseBdev3", 00:20:06.351 "uuid": "d0542e5f-89d3-4cce-96a8-bebf2ff3f122", 00:20:06.351 "is_configured": true, 00:20:06.351 "data_offset": 0, 00:20:06.351 "data_size": 65536 00:20:06.351 } 00:20:06.351 ] 00:20:06.351 } 00:20:06.351 } 00:20:06.351 }' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:06.351 BaseBdev2 00:20:06.351 BaseBdev3' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.351 [2024-11-08 17:07:42.889867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:06.351 [2024-11-08 17:07:42.890019] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.351 [2024-11-08 17:07:42.890182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.351 [2024-11-08 17:07:42.890264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.351 [2024-11-08 17:07:42.890278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64427 
00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 64427 ']' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 64427 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64427 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64427' 00:20:06.351 killing process with pid 64427 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 64427 00:20:06.351 [2024-11-08 17:07:42.928061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.351 17:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 64427 00:20:06.612 [2024-11-08 17:07:43.147902] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.550 17:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:07.550 00:20:07.550 real 0m8.501s 00:20:07.550 user 0m13.135s 00:20:07.550 sys 0m1.601s 00:20:07.550 ************************************ 00:20:07.550 END TEST raid_state_function_test 00:20:07.550 ************************************ 00:20:07.550 17:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:07.550 17:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.550 17:07:44 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:20:07.550 17:07:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:07.550 17:07:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:07.550 17:07:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.550 ************************************ 00:20:07.550 START TEST raid_state_function_test_sb 00:20:07.550 ************************************ 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.550 17:07:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:07.550 Process raid pid: 65032 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=65032 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65032' 00:20:07.550 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 65032 00:20:07.550 17:07:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 65032 ']' 00:20:07.551 17:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.551 17:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.551 17:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.551 17:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.551 17:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:07.551 17:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.551 [2024-11-08 17:07:44.131478] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:20:07.551 [2024-11-08 17:07:44.131830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.810 [2024-11-08 17:07:44.313816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.810 [2024-11-08 17:07:44.459259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.069 [2024-11-08 17:07:44.607832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.069 [2024-11-08 17:07:44.607882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.636 [2024-11-08 17:07:45.087624] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:08.636 [2024-11-08 17:07:45.087790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:08.636 [2024-11-08 17:07:45.087907] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:08.636 [2024-11-08 17:07:45.087937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:08.636 [2024-11-08 17:07:45.087956] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:20:08.636 [2024-11-08 17:07:45.087977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.636 "name": "Existed_Raid", 00:20:08.636 "uuid": "d2966b07-3401-46bc-af8c-1e04121b0f59", 00:20:08.636 "strip_size_kb": 64, 00:20:08.636 "state": "configuring", 00:20:08.636 "raid_level": "concat", 00:20:08.636 "superblock": true, 00:20:08.636 "num_base_bdevs": 3, 00:20:08.636 "num_base_bdevs_discovered": 0, 00:20:08.636 "num_base_bdevs_operational": 3, 00:20:08.636 "base_bdevs_list": [ 00:20:08.636 { 00:20:08.636 "name": "BaseBdev1", 00:20:08.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.636 "is_configured": false, 00:20:08.636 "data_offset": 0, 00:20:08.636 "data_size": 0 00:20:08.636 }, 00:20:08.636 { 00:20:08.636 "name": "BaseBdev2", 00:20:08.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.636 "is_configured": false, 00:20:08.636 "data_offset": 0, 00:20:08.636 "data_size": 0 00:20:08.636 }, 00:20:08.636 { 00:20:08.636 "name": "BaseBdev3", 00:20:08.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.636 "is_configured": false, 00:20:08.636 "data_offset": 0, 00:20:08.636 "data_size": 0 00:20:08.636 } 00:20:08.636 ] 00:20:08.636 }' 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.636 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.905 [2024-11-08 17:07:45.443644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:08.905 [2024-11-08 17:07:45.443681] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.905 [2024-11-08 17:07:45.451647] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:08.905 [2024-11-08 17:07:45.451790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:08.905 [2024-11-08 17:07:45.451853] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:08.905 [2024-11-08 17:07:45.451881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:08.905 [2024-11-08 17:07:45.451899] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:08.905 [2024-11-08 17:07:45.451919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.905 [2024-11-08 17:07:45.486686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.905 BaseBdev1 
00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.905 [ 00:20:08.905 { 00:20:08.905 "name": "BaseBdev1", 00:20:08.905 "aliases": [ 00:20:08.905 "8c5b50ff-cfbe-4c6a-b768-18f673f5af45" 00:20:08.905 ], 00:20:08.905 "product_name": "Malloc disk", 00:20:08.905 "block_size": 512, 00:20:08.905 "num_blocks": 65536, 00:20:08.905 "uuid": "8c5b50ff-cfbe-4c6a-b768-18f673f5af45", 00:20:08.905 "assigned_rate_limits": { 00:20:08.905 
"rw_ios_per_sec": 0, 00:20:08.905 "rw_mbytes_per_sec": 0, 00:20:08.905 "r_mbytes_per_sec": 0, 00:20:08.905 "w_mbytes_per_sec": 0 00:20:08.905 }, 00:20:08.905 "claimed": true, 00:20:08.905 "claim_type": "exclusive_write", 00:20:08.905 "zoned": false, 00:20:08.905 "supported_io_types": { 00:20:08.905 "read": true, 00:20:08.905 "write": true, 00:20:08.905 "unmap": true, 00:20:08.905 "flush": true, 00:20:08.905 "reset": true, 00:20:08.905 "nvme_admin": false, 00:20:08.905 "nvme_io": false, 00:20:08.905 "nvme_io_md": false, 00:20:08.905 "write_zeroes": true, 00:20:08.905 "zcopy": true, 00:20:08.905 "get_zone_info": false, 00:20:08.905 "zone_management": false, 00:20:08.905 "zone_append": false, 00:20:08.905 "compare": false, 00:20:08.905 "compare_and_write": false, 00:20:08.905 "abort": true, 00:20:08.905 "seek_hole": false, 00:20:08.905 "seek_data": false, 00:20:08.905 "copy": true, 00:20:08.905 "nvme_iov_md": false 00:20:08.905 }, 00:20:08.905 "memory_domains": [ 00:20:08.905 { 00:20:08.905 "dma_device_id": "system", 00:20:08.905 "dma_device_type": 1 00:20:08.905 }, 00:20:08.905 { 00:20:08.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.905 "dma_device_type": 2 00:20:08.905 } 00:20:08.905 ], 00:20:08.905 "driver_specific": {} 00:20:08.905 } 00:20:08.905 ] 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.905 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.906 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.906 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.906 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.906 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.906 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.906 "name": "Existed_Raid", 00:20:08.906 "uuid": "be707652-6848-4abb-95f7-c9a5cc00fd50", 00:20:08.906 "strip_size_kb": 64, 00:20:08.906 "state": "configuring", 00:20:08.906 "raid_level": "concat", 00:20:08.906 "superblock": true, 00:20:08.906 "num_base_bdevs": 3, 00:20:08.906 "num_base_bdevs_discovered": 1, 00:20:08.906 "num_base_bdevs_operational": 3, 00:20:08.906 "base_bdevs_list": [ 00:20:08.906 { 00:20:08.906 "name": "BaseBdev1", 00:20:08.906 "uuid": "8c5b50ff-cfbe-4c6a-b768-18f673f5af45", 00:20:08.906 "is_configured": true, 00:20:08.906 "data_offset": 2048, 00:20:08.906 "data_size": 
63488 00:20:08.906 }, 00:20:08.906 { 00:20:08.906 "name": "BaseBdev2", 00:20:08.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.906 "is_configured": false, 00:20:08.906 "data_offset": 0, 00:20:08.906 "data_size": 0 00:20:08.906 }, 00:20:08.906 { 00:20:08.906 "name": "BaseBdev3", 00:20:08.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.906 "is_configured": false, 00:20:08.906 "data_offset": 0, 00:20:08.906 "data_size": 0 00:20:08.906 } 00:20:08.906 ] 00:20:08.906 }' 00:20:08.906 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.906 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.164 [2024-11-08 17:07:45.838839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:09.164 [2024-11-08 17:07:45.839002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.164 [2024-11-08 17:07:45.850918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:09.164 [2024-11-08 
17:07:45.853033] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.164 [2024-11-08 17:07:45.853155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.164 [2024-11-08 17:07:45.853211] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:09.164 [2024-11-08 17:07:45.853238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.164 17:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.422 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.422 "name": "Existed_Raid", 00:20:09.422 "uuid": "75cf126c-5976-4a80-968d-374b1b7eb018", 00:20:09.422 "strip_size_kb": 64, 00:20:09.422 "state": "configuring", 00:20:09.422 "raid_level": "concat", 00:20:09.422 "superblock": true, 00:20:09.422 "num_base_bdevs": 3, 00:20:09.422 "num_base_bdevs_discovered": 1, 00:20:09.422 "num_base_bdevs_operational": 3, 00:20:09.422 "base_bdevs_list": [ 00:20:09.422 { 00:20:09.422 "name": "BaseBdev1", 00:20:09.422 "uuid": "8c5b50ff-cfbe-4c6a-b768-18f673f5af45", 00:20:09.422 "is_configured": true, 00:20:09.422 "data_offset": 2048, 00:20:09.422 "data_size": 63488 00:20:09.422 }, 00:20:09.422 { 00:20:09.422 "name": "BaseBdev2", 00:20:09.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.422 "is_configured": false, 00:20:09.422 "data_offset": 0, 00:20:09.422 "data_size": 0 00:20:09.422 }, 00:20:09.422 { 00:20:09.422 "name": "BaseBdev3", 00:20:09.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.422 "is_configured": false, 00:20:09.422 "data_offset": 0, 00:20:09.422 "data_size": 0 00:20:09.422 } 00:20:09.422 ] 00:20:09.422 }' 00:20:09.422 17:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.422 17:07:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.680 [2024-11-08 17:07:46.231958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:09.680 BaseBdev2
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.680 [
00:20:09.680 {
00:20:09.680 "name": "BaseBdev2",
00:20:09.680 "aliases": [
00:20:09.680 "d43d4a94-6d1d-4ad7-8cca-d75599cce2fd"
00:20:09.680 ],
00:20:09.680 "product_name": "Malloc disk",
00:20:09.680 "block_size": 512,
00:20:09.680 "num_blocks": 65536,
00:20:09.680 "uuid": "d43d4a94-6d1d-4ad7-8cca-d75599cce2fd",
00:20:09.680 "assigned_rate_limits": {
00:20:09.680 "rw_ios_per_sec": 0,
00:20:09.680 "rw_mbytes_per_sec": 0,
00:20:09.680 "r_mbytes_per_sec": 0,
00:20:09.680 "w_mbytes_per_sec": 0
00:20:09.680 },
00:20:09.680 "claimed": true,
00:20:09.680 "claim_type": "exclusive_write",
00:20:09.680 "zoned": false,
00:20:09.680 "supported_io_types": {
00:20:09.680 "read": true,
00:20:09.680 "write": true,
00:20:09.680 "unmap": true,
00:20:09.680 "flush": true,
00:20:09.680 "reset": true,
00:20:09.680 "nvme_admin": false,
00:20:09.680 "nvme_io": false,
00:20:09.680 "nvme_io_md": false,
00:20:09.680 "write_zeroes": true,
00:20:09.680 "zcopy": true,
00:20:09.680 "get_zone_info": false,
00:20:09.680 "zone_management": false,
00:20:09.680 "zone_append": false,
00:20:09.680 "compare": false,
00:20:09.680 "compare_and_write": false,
00:20:09.680 "abort": true,
00:20:09.680 "seek_hole": false,
00:20:09.680 "seek_data": false,
00:20:09.680 "copy": true,
00:20:09.680 "nvme_iov_md": false
00:20:09.680 },
00:20:09.680 "memory_domains": [
00:20:09.680 {
00:20:09.680 "dma_device_id": "system",
00:20:09.680 "dma_device_type": 1
00:20:09.680 },
00:20:09.680 {
00:20:09.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:09.680 "dma_device_type": 2
00:20:09.680 }
00:20:09.680 ],
00:20:09.680 "driver_specific": {}
00:20:09.680 }
00:20:09.680 ]
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:09.680 "name": "Existed_Raid",
00:20:09.680 "uuid": "75cf126c-5976-4a80-968d-374b1b7eb018",
00:20:09.680 "strip_size_kb": 64,
00:20:09.680 "state": "configuring",
00:20:09.680 "raid_level": "concat",
00:20:09.680 "superblock": true,
00:20:09.680 "num_base_bdevs": 3,
00:20:09.680 "num_base_bdevs_discovered": 2,
00:20:09.680 "num_base_bdevs_operational": 3,
00:20:09.680 "base_bdevs_list": [
00:20:09.680 {
00:20:09.680 "name": "BaseBdev1",
00:20:09.680 "uuid": "8c5b50ff-cfbe-4c6a-b768-18f673f5af45",
00:20:09.680 "is_configured": true,
00:20:09.680 "data_offset": 2048,
00:20:09.680 "data_size": 63488
00:20:09.680 },
00:20:09.680 {
00:20:09.680 "name": "BaseBdev2",
00:20:09.680 "uuid": "d43d4a94-6d1d-4ad7-8cca-d75599cce2fd",
00:20:09.680 "is_configured": true,
00:20:09.680 "data_offset": 2048,
00:20:09.680 "data_size": 63488
00:20:09.680 },
00:20:09.680 {
00:20:09.680 "name": "BaseBdev3",
00:20:09.680 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:09.680 "is_configured": false,
00:20:09.680 "data_offset": 0,
00:20:09.680 "data_size": 0
00:20:09.680 }
00:20:09.680 ]
00:20:09.680 }'
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:09.680 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.938 [2024-11-08 17:07:46.614162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:09.938 [2024-11-08 17:07:46.614583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:20:09.938 [2024-11-08 17:07:46.614686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:20:09.938 [2024-11-08 17:07:46.615002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:20:09.938 BaseBdev3
[2024-11-08 17:07:46.615212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:20:09.938 [2024-11-08 17:07:46.615226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:20:09.938 [2024-11-08 17:07:46.615366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.938 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:09.938 [
00:20:09.938 {
00:20:09.938 "name": "BaseBdev3",
00:20:09.938 "aliases": [
00:20:09.938 "1e5b20b6-0438-4135-adfc-645f862a4e05"
00:20:09.938 ],
00:20:09.938 "product_name": "Malloc disk",
00:20:09.938 "block_size": 512,
00:20:09.938 "num_blocks": 65536,
00:20:09.938 "uuid": "1e5b20b6-0438-4135-adfc-645f862a4e05",
00:20:09.938 "assigned_rate_limits": {
00:20:09.938 "rw_ios_per_sec": 0,
00:20:09.938 "rw_mbytes_per_sec": 0,
00:20:09.938 "r_mbytes_per_sec": 0,
00:20:09.938 "w_mbytes_per_sec": 0
00:20:09.938 },
00:20:09.938 "claimed": true,
00:20:09.938 "claim_type": "exclusive_write",
00:20:09.938 "zoned": false,
00:20:09.938 "supported_io_types": {
00:20:09.938 "read": true,
00:20:09.938 "write": true,
00:20:09.938 "unmap": true,
00:20:09.938 "flush": true,
00:20:09.938 "reset": true,
00:20:09.938 "nvme_admin": false,
00:20:09.938 "nvme_io": false,
00:20:09.938 "nvme_io_md": false,
00:20:09.938 "write_zeroes": true,
00:20:09.938 "zcopy": true,
00:20:09.938 "get_zone_info": false,
00:20:09.938 "zone_management": false,
00:20:09.938 "zone_append": false,
00:20:09.938 "compare": false,
00:20:09.938 "compare_and_write": false,
00:20:09.939 "abort": true,
00:20:09.939 "seek_hole": false,
00:20:09.939 "seek_data": false,
00:20:09.939 "copy": true,
00:20:09.939 "nvme_iov_md": false
00:20:09.939 },
00:20:09.939 "memory_domains": [
00:20:09.939 {
00:20:09.939 "dma_device_id": "system",
00:20:09.939 "dma_device_type": 1
00:20:09.939 },
00:20:09.939 {
00:20:09.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:09.939 "dma_device_type": 2
00:20:09.939 }
00:20:09.939 ],
00:20:09.939 "driver_specific": {}
00:20:09.939 }
00:20:09.939 ]
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.939 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:10.196 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.196 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:10.196 "name": "Existed_Raid",
00:20:10.196 "uuid": "75cf126c-5976-4a80-968d-374b1b7eb018",
00:20:10.196 "strip_size_kb": 64,
00:20:10.196 "state": "online",
00:20:10.196 "raid_level": "concat",
00:20:10.196 "superblock": true,
00:20:10.196 "num_base_bdevs": 3,
00:20:10.196 "num_base_bdevs_discovered": 3,
00:20:10.196 "num_base_bdevs_operational": 3,
00:20:10.196 "base_bdevs_list": [
00:20:10.196 {
00:20:10.196 "name": "BaseBdev1",
00:20:10.196 "uuid": "8c5b50ff-cfbe-4c6a-b768-18f673f5af45",
00:20:10.196 "is_configured": true,
00:20:10.196 "data_offset": 2048,
00:20:10.196 "data_size": 63488
00:20:10.196 },
00:20:10.196 {
00:20:10.196 "name": "BaseBdev2",
00:20:10.196 "uuid": "d43d4a94-6d1d-4ad7-8cca-d75599cce2fd",
00:20:10.196 "is_configured": true,
00:20:10.196 "data_offset": 2048,
00:20:10.196 "data_size": 63488
00:20:10.196 },
00:20:10.196 {
00:20:10.196 "name": "BaseBdev3",
00:20:10.196 "uuid": "1e5b20b6-0438-4135-adfc-645f862a4e05",
00:20:10.196 "is_configured": true,
00:20:10.196 "data_offset": 2048,
00:20:10.196 "data_size": 63488
00:20:10.196 }
00:20:10.196 ]
00:20:10.196 }'
00:20:10.196 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:10.196 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:10.473 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:20:10.473 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:20:10.473 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:10.473 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:10.473 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:20:10.473 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:10.473 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:20:10.473 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.473 17:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:10.473 17:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
[2024-11-08 17:07:46.998655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:10.473 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.473 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:10.474 "name": "Existed_Raid",
00:20:10.474 "aliases": [
00:20:10.474 "75cf126c-5976-4a80-968d-374b1b7eb018"
00:20:10.474 ],
00:20:10.474 "product_name": "Raid Volume",
00:20:10.474 "block_size": 512,
00:20:10.474 "num_blocks": 190464,
00:20:10.474 "uuid": "75cf126c-5976-4a80-968d-374b1b7eb018",
00:20:10.474 "assigned_rate_limits": {
00:20:10.474 "rw_ios_per_sec": 0,
00:20:10.474 "rw_mbytes_per_sec": 0,
00:20:10.474 "r_mbytes_per_sec": 0,
00:20:10.474 "w_mbytes_per_sec": 0
00:20:10.474 },
00:20:10.474 "claimed": false,
00:20:10.474 "zoned": false,
00:20:10.474 "supported_io_types": {
00:20:10.474 "read": true,
00:20:10.474 "write": true,
00:20:10.474 "unmap": true,
00:20:10.474 "flush": true,
00:20:10.474 "reset": true,
00:20:10.474 "nvme_admin": false,
00:20:10.474 "nvme_io": false,
00:20:10.474 "nvme_io_md": false,
00:20:10.474 "write_zeroes": true,
00:20:10.474 "zcopy": false,
00:20:10.474 "get_zone_info": false,
00:20:10.474 "zone_management": false,
00:20:10.474 "zone_append": false,
00:20:10.474 "compare": false,
00:20:10.474 "compare_and_write": false,
00:20:10.474 "abort": false,
00:20:10.474 "seek_hole": false,
00:20:10.474 "seek_data": false,
00:20:10.474 "copy": false,
00:20:10.474 "nvme_iov_md": false
00:20:10.474 },
00:20:10.474 "memory_domains": [
00:20:10.474 {
00:20:10.474 "dma_device_id": "system",
00:20:10.474 "dma_device_type": 1
00:20:10.474 },
00:20:10.474 {
00:20:10.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:10.474 "dma_device_type": 2
00:20:10.474 },
00:20:10.474 {
00:20:10.474 "dma_device_id": "system",
00:20:10.474 "dma_device_type": 1
00:20:10.474 },
00:20:10.474 {
00:20:10.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:10.474 "dma_device_type": 2
00:20:10.474 },
00:20:10.474 {
00:20:10.474 "dma_device_id": "system",
00:20:10.474 "dma_device_type": 1
00:20:10.474 },
00:20:10.474 {
00:20:10.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:10.474 "dma_device_type": 2
00:20:10.474 }
00:20:10.474 ],
00:20:10.474 "driver_specific": {
00:20:10.474 "raid": {
00:20:10.474 "uuid": "75cf126c-5976-4a80-968d-374b1b7eb018",
00:20:10.474 "strip_size_kb": 64,
00:20:10.474 "state": "online",
00:20:10.474 "raid_level": "concat",
00:20:10.474 "superblock": true,
00:20:10.474 "num_base_bdevs": 3,
00:20:10.474 "num_base_bdevs_discovered": 3,
00:20:10.474 "num_base_bdevs_operational": 3,
00:20:10.474 "base_bdevs_list": [
00:20:10.474 {
00:20:10.474 "name": "BaseBdev1",
00:20:10.474 "uuid": "8c5b50ff-cfbe-4c6a-b768-18f673f5af45",
00:20:10.474 "is_configured": true,
00:20:10.474 "data_offset": 2048,
00:20:10.474 "data_size": 63488
00:20:10.474 },
00:20:10.474 {
00:20:10.474 "name": "BaseBdev2",
00:20:10.474 "uuid": "d43d4a94-6d1d-4ad7-8cca-d75599cce2fd",
00:20:10.474 "is_configured": true,
00:20:10.474 "data_offset": 2048,
00:20:10.474 "data_size": 63488
00:20:10.474 },
00:20:10.474 {
00:20:10.474 "name": "BaseBdev3",
00:20:10.474 "uuid": "1e5b20b6-0438-4135-adfc-645f862a4e05",
00:20:10.474 "is_configured": true,
00:20:10.474 "data_offset": 2048,
00:20:10.474 "data_size": 63488
00:20:10.474 }
00:20:10.474 ]
00:20:10.474 }
00:20:10.474 }
00:20:10.474 }'
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:20:10.474 BaseBdev2
00:20:10.474 BaseBdev3'
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.474 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:10.474 [2024-11-08 17:07:47.182424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:10.474 [2024-11-08 17:07:47.182546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:10.474 [2024-11-08 17:07:47.182653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:10.797 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:10.798 "name": "Existed_Raid",
00:20:10.798 "uuid": "75cf126c-5976-4a80-968d-374b1b7eb018",
00:20:10.798 "strip_size_kb": 64,
00:20:10.798 "state": "offline",
00:20:10.798 "raid_level": "concat",
00:20:10.798 "superblock": true,
00:20:10.798 "num_base_bdevs": 3,
00:20:10.798 "num_base_bdevs_discovered": 2,
00:20:10.798 "num_base_bdevs_operational": 2,
00:20:10.798 "base_bdevs_list": [
00:20:10.798 {
00:20:10.798 "name": null,
00:20:10.798 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:10.798 "is_configured": false,
00:20:10.798 "data_offset": 0,
00:20:10.798 "data_size": 63488
00:20:10.798 },
00:20:10.798 {
00:20:10.798 "name": "BaseBdev2",
00:20:10.798 "uuid": "d43d4a94-6d1d-4ad7-8cca-d75599cce2fd",
00:20:10.798 "is_configured": true,
00:20:10.798 "data_offset": 2048,
00:20:10.798 "data_size": 63488
00:20:10.798 },
00:20:10.798 {
00:20:10.798 "name": "BaseBdev3",
00:20:10.798 "uuid": "1e5b20b6-0438-4135-adfc-645f862a4e05",
00:20:10.798 "is_configured": true,
00:20:10.798 "data_offset": 2048,
00:20:10.798 "data_size": 63488
00:20:10.798 }
00:20:10.798 ]
00:20:10.798 }'
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:10.798 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.055 [2024-11-08 17:07:47.605045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.055 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.055 [2024-11-08 17:07:47.707818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:20:11.055 [2024-11-08 17:07:47.707975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.313 BaseBdev2
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.313 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.313 [
00:20:11.313 {
00:20:11.313 "name": "BaseBdev2",
00:20:11.313 "aliases": [
00:20:11.313 "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b"
00:20:11.313 ],
00:20:11.313 "product_name": "Malloc disk",
00:20:11.313 "block_size": 512,
00:20:11.313 "num_blocks": 65536,
00:20:11.313 "uuid": "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b",
00:20:11.313 "assigned_rate_limits": {
00:20:11.313 "rw_ios_per_sec": 0,
00:20:11.313 "rw_mbytes_per_sec": 0,
00:20:11.313 "r_mbytes_per_sec": 0,
00:20:11.313 "w_mbytes_per_sec": 0
00:20:11.313 },
00:20:11.314 "claimed": false,
00:20:11.314 "zoned": false,
00:20:11.314 "supported_io_types": {
00:20:11.314 "read": true,
00:20:11.314 "write": true,
00:20:11.314 "unmap": true,
00:20:11.314 "flush": true,
00:20:11.314 "reset": true,
00:20:11.314 "nvme_admin": false,
00:20:11.314 "nvme_io": false,
00:20:11.314 "nvme_io_md": false,
00:20:11.314 "write_zeroes": true,
00:20:11.314 "zcopy": true,
00:20:11.314 "get_zone_info": false,
00:20:11.314 "zone_management": false,
00:20:11.314 "zone_append": false,
00:20:11.314 "compare": false,
00:20:11.314 "compare_and_write": false,
00:20:11.314 "abort": true,
00:20:11.314 "seek_hole": false,
00:20:11.314 "seek_data": false,
00:20:11.314 "copy": true,
00:20:11.314 "nvme_iov_md": false
00:20:11.314 },
00:20:11.314 "memory_domains": [
00:20:11.314 {
00:20:11.314 "dma_device_id": "system",
00:20:11.314 "dma_device_type": 1
00:20:11.314 },
00:20:11.314 {
00:20:11.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:11.314 "dma_device_type": 2
00:20:11.314 }
00:20:11.314 ],
00:20:11.314 "driver_specific": {}
00:20:11.314 }
00:20:11.314 ]
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.314 BaseBdev3
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.314 [
00:20:11.314 {
00:20:11.314 "name": "BaseBdev3",
00:20:11.314 "aliases": [
00:20:11.314 "34e22f7b-2550-4209-aab0-6ed6b66e2c58"
00:20:11.314 ],
00:20:11.314 "product_name": "Malloc disk",
00:20:11.314 "block_size": 512,
00:20:11.314 "num_blocks": 65536,
00:20:11.314 "uuid": "34e22f7b-2550-4209-aab0-6ed6b66e2c58",
00:20:11.314 "assigned_rate_limits": {
00:20:11.314 "rw_ios_per_sec": 0,
00:20:11.314 "rw_mbytes_per_sec": 0,
00:20:11.314 "r_mbytes_per_sec": 0,
00:20:11.314 "w_mbytes_per_sec": 0
00:20:11.314 },
00:20:11.314 "claimed": false,
00:20:11.314 "zoned": false,
00:20:11.314 "supported_io_types": {
00:20:11.314 "read": true,
00:20:11.314 "write": true,
00:20:11.314 "unmap": true,
00:20:11.314 "flush": true,
00:20:11.314 "reset": true,
00:20:11.314 "nvme_admin": false,
00:20:11.314 "nvme_io": false,
00:20:11.314 "nvme_io_md": false,
00:20:11.314 "write_zeroes": true,
00:20:11.314 "zcopy": true,
00:20:11.314 "get_zone_info": false,
00:20:11.314 "zone_management": false,
00:20:11.314 "zone_append": false,
00:20:11.314 "compare": false,
00:20:11.314 "compare_and_write": false,
00:20:11.314 "abort": true,
00:20:11.314 "seek_hole": false,
00:20:11.314 "seek_data": false,
00:20:11.314 "copy": true,
00:20:11.314 "nvme_iov_md": false
00:20:11.314 },
00:20:11.314 "memory_domains": [
00:20:11.314 {
00:20:11.314 "dma_device_id": "system",
00:20:11.314 "dma_device_type": 1
00:20:11.314 },
00:20:11.314 {
00:20:11.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:11.314 "dma_device_type": 2
00:20:11.314 }
00:20:11.314 ],
00:20:11.314 "driver_specific": {}
00:20:11.314 }
00:20:11.314 ]
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:11.314 [2024-11-08 17:07:47.916093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:20:11.314 [2024-11-08 17:07:47.916234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:20:11.314 [2024-11-08 17:07:47.916309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:11.314 [2024-11-08 17:07:47.921087]
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.314 17:07:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.314 "name": "Existed_Raid", 00:20:11.314 "uuid": "41c54019-6c89-4363-87cf-dfa73ea024dd", 00:20:11.314 "strip_size_kb": 64, 00:20:11.314 "state": "configuring", 00:20:11.314 "raid_level": "concat", 00:20:11.314 "superblock": true, 00:20:11.314 "num_base_bdevs": 3, 00:20:11.314 "num_base_bdevs_discovered": 2, 00:20:11.314 "num_base_bdevs_operational": 3, 00:20:11.314 "base_bdevs_list": [ 00:20:11.314 { 00:20:11.314 "name": "BaseBdev1", 00:20:11.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.314 "is_configured": false, 00:20:11.314 "data_offset": 0, 00:20:11.314 "data_size": 0 00:20:11.314 }, 00:20:11.314 { 00:20:11.314 "name": "BaseBdev2", 00:20:11.314 "uuid": "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b", 00:20:11.314 "is_configured": true, 00:20:11.314 "data_offset": 2048, 00:20:11.314 "data_size": 63488 00:20:11.314 }, 00:20:11.314 { 00:20:11.314 "name": "BaseBdev3", 00:20:11.314 "uuid": "34e22f7b-2550-4209-aab0-6ed6b66e2c58", 00:20:11.314 "is_configured": true, 00:20:11.314 "data_offset": 2048, 00:20:11.314 "data_size": 63488 00:20:11.314 } 00:20:11.314 ] 00:20:11.314 }' 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.314 17:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.572 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:11.572 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.572 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.572 [2024-11-08 17:07:48.281678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.829 17:07:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.829 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.829 "name": "Existed_Raid", 00:20:11.829 "uuid": "41c54019-6c89-4363-87cf-dfa73ea024dd", 00:20:11.829 "strip_size_kb": 64, 
00:20:11.829 "state": "configuring", 00:20:11.829 "raid_level": "concat", 00:20:11.829 "superblock": true, 00:20:11.829 "num_base_bdevs": 3, 00:20:11.829 "num_base_bdevs_discovered": 1, 00:20:11.829 "num_base_bdevs_operational": 3, 00:20:11.829 "base_bdevs_list": [ 00:20:11.829 { 00:20:11.829 "name": "BaseBdev1", 00:20:11.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.829 "is_configured": false, 00:20:11.829 "data_offset": 0, 00:20:11.829 "data_size": 0 00:20:11.829 }, 00:20:11.829 { 00:20:11.829 "name": null, 00:20:11.829 "uuid": "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b", 00:20:11.829 "is_configured": false, 00:20:11.829 "data_offset": 0, 00:20:11.829 "data_size": 63488 00:20:11.829 }, 00:20:11.829 { 00:20:11.829 "name": "BaseBdev3", 00:20:11.830 "uuid": "34e22f7b-2550-4209-aab0-6ed6b66e2c58", 00:20:11.830 "is_configured": true, 00:20:11.830 "data_offset": 2048, 00:20:11.830 "data_size": 63488 00:20:11.830 } 00:20:11.830 ] 00:20:11.830 }' 00:20:11.830 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.830 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.087 [2024-11-08 17:07:48.687158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:12.087 BaseBdev1 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.087 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.087 
[ 00:20:12.087 { 00:20:12.087 "name": "BaseBdev1", 00:20:12.087 "aliases": [ 00:20:12.087 "214cccdc-d81f-4267-8667-96e888ed7e7b" 00:20:12.087 ], 00:20:12.087 "product_name": "Malloc disk", 00:20:12.087 "block_size": 512, 00:20:12.087 "num_blocks": 65536, 00:20:12.087 "uuid": "214cccdc-d81f-4267-8667-96e888ed7e7b", 00:20:12.087 "assigned_rate_limits": { 00:20:12.088 "rw_ios_per_sec": 0, 00:20:12.088 "rw_mbytes_per_sec": 0, 00:20:12.088 "r_mbytes_per_sec": 0, 00:20:12.088 "w_mbytes_per_sec": 0 00:20:12.088 }, 00:20:12.088 "claimed": true, 00:20:12.088 "claim_type": "exclusive_write", 00:20:12.088 "zoned": false, 00:20:12.088 "supported_io_types": { 00:20:12.088 "read": true, 00:20:12.088 "write": true, 00:20:12.088 "unmap": true, 00:20:12.088 "flush": true, 00:20:12.088 "reset": true, 00:20:12.088 "nvme_admin": false, 00:20:12.088 "nvme_io": false, 00:20:12.088 "nvme_io_md": false, 00:20:12.088 "write_zeroes": true, 00:20:12.088 "zcopy": true, 00:20:12.088 "get_zone_info": false, 00:20:12.088 "zone_management": false, 00:20:12.088 "zone_append": false, 00:20:12.088 "compare": false, 00:20:12.088 "compare_and_write": false, 00:20:12.088 "abort": true, 00:20:12.088 "seek_hole": false, 00:20:12.088 "seek_data": false, 00:20:12.088 "copy": true, 00:20:12.088 "nvme_iov_md": false 00:20:12.088 }, 00:20:12.088 "memory_domains": [ 00:20:12.088 { 00:20:12.088 "dma_device_id": "system", 00:20:12.088 "dma_device_type": 1 00:20:12.088 }, 00:20:12.088 { 00:20:12.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.088 "dma_device_type": 2 00:20:12.088 } 00:20:12.088 ], 00:20:12.088 "driver_specific": {} 00:20:12.088 } 00:20:12.088 ] 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.088 "name": "Existed_Raid", 00:20:12.088 "uuid": "41c54019-6c89-4363-87cf-dfa73ea024dd", 00:20:12.088 "strip_size_kb": 64, 00:20:12.088 "state": "configuring", 00:20:12.088 "raid_level": "concat", 00:20:12.088 "superblock": true, 
00:20:12.088 "num_base_bdevs": 3, 00:20:12.088 "num_base_bdevs_discovered": 2, 00:20:12.088 "num_base_bdevs_operational": 3, 00:20:12.088 "base_bdevs_list": [ 00:20:12.088 { 00:20:12.088 "name": "BaseBdev1", 00:20:12.088 "uuid": "214cccdc-d81f-4267-8667-96e888ed7e7b", 00:20:12.088 "is_configured": true, 00:20:12.088 "data_offset": 2048, 00:20:12.088 "data_size": 63488 00:20:12.088 }, 00:20:12.088 { 00:20:12.088 "name": null, 00:20:12.088 "uuid": "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b", 00:20:12.088 "is_configured": false, 00:20:12.088 "data_offset": 0, 00:20:12.088 "data_size": 63488 00:20:12.088 }, 00:20:12.088 { 00:20:12.088 "name": "BaseBdev3", 00:20:12.088 "uuid": "34e22f7b-2550-4209-aab0-6ed6b66e2c58", 00:20:12.088 "is_configured": true, 00:20:12.088 "data_offset": 2048, 00:20:12.088 "data_size": 63488 00:20:12.088 } 00:20:12.088 ] 00:20:12.088 }' 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.088 17:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.345 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.345 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:12.345 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.345 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.602 [2024-11-08 17:07:49.091335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.602 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.603 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.603 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.603 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.603 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.603 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:12.603 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.603 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.603 "name": "Existed_Raid", 00:20:12.603 "uuid": "41c54019-6c89-4363-87cf-dfa73ea024dd", 00:20:12.603 "strip_size_kb": 64, 00:20:12.603 "state": "configuring", 00:20:12.603 "raid_level": "concat", 00:20:12.603 "superblock": true, 00:20:12.603 "num_base_bdevs": 3, 00:20:12.603 "num_base_bdevs_discovered": 1, 00:20:12.603 "num_base_bdevs_operational": 3, 00:20:12.603 "base_bdevs_list": [ 00:20:12.603 { 00:20:12.603 "name": "BaseBdev1", 00:20:12.603 "uuid": "214cccdc-d81f-4267-8667-96e888ed7e7b", 00:20:12.603 "is_configured": true, 00:20:12.603 "data_offset": 2048, 00:20:12.603 "data_size": 63488 00:20:12.603 }, 00:20:12.603 { 00:20:12.603 "name": null, 00:20:12.603 "uuid": "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b", 00:20:12.603 "is_configured": false, 00:20:12.603 "data_offset": 0, 00:20:12.603 "data_size": 63488 00:20:12.603 }, 00:20:12.603 { 00:20:12.603 "name": null, 00:20:12.603 "uuid": "34e22f7b-2550-4209-aab0-6ed6b66e2c58", 00:20:12.603 "is_configured": false, 00:20:12.603 "data_offset": 0, 00:20:12.603 "data_size": 63488 00:20:12.603 } 00:20:12.603 ] 00:20:12.603 }' 00:20:12.603 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.603 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.860 [2024-11-08 17:07:49.459478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.860 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.860 "name": "Existed_Raid", 00:20:12.860 "uuid": "41c54019-6c89-4363-87cf-dfa73ea024dd", 00:20:12.860 "strip_size_kb": 64, 00:20:12.860 "state": "configuring", 00:20:12.860 "raid_level": "concat", 00:20:12.860 "superblock": true, 00:20:12.860 "num_base_bdevs": 3, 00:20:12.860 "num_base_bdevs_discovered": 2, 00:20:12.860 "num_base_bdevs_operational": 3, 00:20:12.860 "base_bdevs_list": [ 00:20:12.860 { 00:20:12.860 "name": "BaseBdev1", 00:20:12.860 "uuid": "214cccdc-d81f-4267-8667-96e888ed7e7b", 00:20:12.860 "is_configured": true, 00:20:12.860 "data_offset": 2048, 00:20:12.860 "data_size": 63488 00:20:12.860 }, 00:20:12.860 { 00:20:12.861 "name": null, 00:20:12.861 "uuid": "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b", 00:20:12.861 "is_configured": false, 00:20:12.861 "data_offset": 0, 00:20:12.861 "data_size": 63488 00:20:12.861 }, 00:20:12.861 { 00:20:12.861 "name": "BaseBdev3", 00:20:12.861 "uuid": "34e22f7b-2550-4209-aab0-6ed6b66e2c58", 00:20:12.861 "is_configured": true, 00:20:12.861 "data_offset": 2048, 00:20:12.861 "data_size": 63488 00:20:12.861 } 00:20:12.861 ] 00:20:12.861 }' 00:20:12.861 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.861 17:07:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.427 [2024-11-08 17:07:49.871592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.427 "name": "Existed_Raid", 00:20:13.427 "uuid": "41c54019-6c89-4363-87cf-dfa73ea024dd", 00:20:13.427 "strip_size_kb": 64, 00:20:13.427 "state": "configuring", 00:20:13.427 "raid_level": "concat", 00:20:13.427 "superblock": true, 00:20:13.427 "num_base_bdevs": 3, 00:20:13.427 "num_base_bdevs_discovered": 1, 00:20:13.427 "num_base_bdevs_operational": 3, 00:20:13.427 "base_bdevs_list": [ 00:20:13.427 { 00:20:13.427 "name": null, 00:20:13.427 "uuid": "214cccdc-d81f-4267-8667-96e888ed7e7b", 00:20:13.427 "is_configured": false, 00:20:13.427 "data_offset": 0, 00:20:13.427 "data_size": 63488 00:20:13.427 }, 00:20:13.427 { 00:20:13.427 "name": null, 00:20:13.427 "uuid": "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b", 00:20:13.427 "is_configured": false, 00:20:13.427 "data_offset": 0, 
00:20:13.427 "data_size": 63488 00:20:13.427 }, 00:20:13.427 { 00:20:13.427 "name": "BaseBdev3", 00:20:13.427 "uuid": "34e22f7b-2550-4209-aab0-6ed6b66e2c58", 00:20:13.427 "is_configured": true, 00:20:13.427 "data_offset": 2048, 00:20:13.427 "data_size": 63488 00:20:13.427 } 00:20:13.427 ] 00:20:13.427 }' 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.427 17:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.729 [2024-11-08 17:07:50.299453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:13.729 17:07:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.729 "name": "Existed_Raid", 00:20:13.729 "uuid": "41c54019-6c89-4363-87cf-dfa73ea024dd", 00:20:13.729 "strip_size_kb": 64, 00:20:13.729 "state": "configuring", 00:20:13.729 "raid_level": "concat", 00:20:13.729 "superblock": true, 00:20:13.729 "num_base_bdevs": 3, 00:20:13.729 
"num_base_bdevs_discovered": 2, 00:20:13.729 "num_base_bdevs_operational": 3, 00:20:13.729 "base_bdevs_list": [ 00:20:13.729 { 00:20:13.729 "name": null, 00:20:13.729 "uuid": "214cccdc-d81f-4267-8667-96e888ed7e7b", 00:20:13.729 "is_configured": false, 00:20:13.729 "data_offset": 0, 00:20:13.729 "data_size": 63488 00:20:13.729 }, 00:20:13.729 { 00:20:13.729 "name": "BaseBdev2", 00:20:13.729 "uuid": "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b", 00:20:13.729 "is_configured": true, 00:20:13.729 "data_offset": 2048, 00:20:13.729 "data_size": 63488 00:20:13.729 }, 00:20:13.729 { 00:20:13.729 "name": "BaseBdev3", 00:20:13.729 "uuid": "34e22f7b-2550-4209-aab0-6ed6b66e2c58", 00:20:13.729 "is_configured": true, 00:20:13.729 "data_offset": 2048, 00:20:13.729 "data_size": 63488 00:20:13.729 } 00:20:13.729 ] 00:20:13.729 }' 00:20:13.729 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.730 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.988 17:07:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 214cccdc-d81f-4267-8667-96e888ed7e7b 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.988 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.247 [2024-11-08 17:07:50.728689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:14.247 NewBaseBdev 00:20:14.247 [2024-11-08 17:07:50.729078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:14.247 [2024-11-08 17:07:50.729101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:14.247 [2024-11-08 17:07:50.729367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:14.247 [2024-11-08 17:07:50.729501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:14.247 [2024-11-08 17:07:50.729509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:14.247 [2024-11-08 17:07:50.729640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:20:14.247 
17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.247 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.247 [ 00:20:14.248 { 00:20:14.248 "name": "NewBaseBdev", 00:20:14.248 "aliases": [ 00:20:14.248 "214cccdc-d81f-4267-8667-96e888ed7e7b" 00:20:14.248 ], 00:20:14.248 "product_name": "Malloc disk", 00:20:14.248 "block_size": 512, 00:20:14.248 "num_blocks": 65536, 00:20:14.248 "uuid": "214cccdc-d81f-4267-8667-96e888ed7e7b", 00:20:14.248 "assigned_rate_limits": { 00:20:14.248 "rw_ios_per_sec": 0, 00:20:14.248 "rw_mbytes_per_sec": 0, 00:20:14.248 "r_mbytes_per_sec": 0, 00:20:14.248 "w_mbytes_per_sec": 0 00:20:14.248 }, 00:20:14.248 "claimed": true, 00:20:14.248 "claim_type": "exclusive_write", 00:20:14.248 "zoned": false, 00:20:14.248 "supported_io_types": { 00:20:14.248 "read": true, 00:20:14.248 "write": true, 00:20:14.248 
"unmap": true, 00:20:14.248 "flush": true, 00:20:14.248 "reset": true, 00:20:14.248 "nvme_admin": false, 00:20:14.248 "nvme_io": false, 00:20:14.248 "nvme_io_md": false, 00:20:14.248 "write_zeroes": true, 00:20:14.248 "zcopy": true, 00:20:14.248 "get_zone_info": false, 00:20:14.248 "zone_management": false, 00:20:14.248 "zone_append": false, 00:20:14.248 "compare": false, 00:20:14.248 "compare_and_write": false, 00:20:14.248 "abort": true, 00:20:14.248 "seek_hole": false, 00:20:14.248 "seek_data": false, 00:20:14.248 "copy": true, 00:20:14.248 "nvme_iov_md": false 00:20:14.248 }, 00:20:14.248 "memory_domains": [ 00:20:14.248 { 00:20:14.248 "dma_device_id": "system", 00:20:14.248 "dma_device_type": 1 00:20:14.248 }, 00:20:14.248 { 00:20:14.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.248 "dma_device_type": 2 00:20:14.248 } 00:20:14.248 ], 00:20:14.248 "driver_specific": {} 00:20:14.248 } 00:20:14.248 ] 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.248 "name": "Existed_Raid", 00:20:14.248 "uuid": "41c54019-6c89-4363-87cf-dfa73ea024dd", 00:20:14.248 "strip_size_kb": 64, 00:20:14.248 "state": "online", 00:20:14.248 "raid_level": "concat", 00:20:14.248 "superblock": true, 00:20:14.248 "num_base_bdevs": 3, 00:20:14.248 "num_base_bdevs_discovered": 3, 00:20:14.248 "num_base_bdevs_operational": 3, 00:20:14.248 "base_bdevs_list": [ 00:20:14.248 { 00:20:14.248 "name": "NewBaseBdev", 00:20:14.248 "uuid": "214cccdc-d81f-4267-8667-96e888ed7e7b", 00:20:14.248 "is_configured": true, 00:20:14.248 "data_offset": 2048, 00:20:14.248 "data_size": 63488 00:20:14.248 }, 00:20:14.248 { 00:20:14.248 "name": "BaseBdev2", 00:20:14.248 "uuid": "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b", 00:20:14.248 "is_configured": true, 00:20:14.248 "data_offset": 2048, 00:20:14.248 "data_size": 63488 00:20:14.248 }, 00:20:14.248 { 00:20:14.248 "name": "BaseBdev3", 00:20:14.248 "uuid": "34e22f7b-2550-4209-aab0-6ed6b66e2c58", 
00:20:14.248 "is_configured": true, 00:20:14.248 "data_offset": 2048, 00:20:14.248 "data_size": 63488 00:20:14.248 } 00:20:14.248 ] 00:20:14.248 }' 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.248 17:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.506 [2024-11-08 17:07:51.077169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.506 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:14.506 "name": "Existed_Raid", 00:20:14.506 "aliases": [ 00:20:14.506 "41c54019-6c89-4363-87cf-dfa73ea024dd" 00:20:14.506 ], 00:20:14.506 
"product_name": "Raid Volume", 00:20:14.506 "block_size": 512, 00:20:14.506 "num_blocks": 190464, 00:20:14.506 "uuid": "41c54019-6c89-4363-87cf-dfa73ea024dd", 00:20:14.506 "assigned_rate_limits": { 00:20:14.506 "rw_ios_per_sec": 0, 00:20:14.506 "rw_mbytes_per_sec": 0, 00:20:14.506 "r_mbytes_per_sec": 0, 00:20:14.506 "w_mbytes_per_sec": 0 00:20:14.506 }, 00:20:14.506 "claimed": false, 00:20:14.506 "zoned": false, 00:20:14.506 "supported_io_types": { 00:20:14.506 "read": true, 00:20:14.506 "write": true, 00:20:14.506 "unmap": true, 00:20:14.506 "flush": true, 00:20:14.506 "reset": true, 00:20:14.506 "nvme_admin": false, 00:20:14.506 "nvme_io": false, 00:20:14.506 "nvme_io_md": false, 00:20:14.506 "write_zeroes": true, 00:20:14.506 "zcopy": false, 00:20:14.506 "get_zone_info": false, 00:20:14.506 "zone_management": false, 00:20:14.506 "zone_append": false, 00:20:14.506 "compare": false, 00:20:14.506 "compare_and_write": false, 00:20:14.506 "abort": false, 00:20:14.506 "seek_hole": false, 00:20:14.506 "seek_data": false, 00:20:14.506 "copy": false, 00:20:14.506 "nvme_iov_md": false 00:20:14.506 }, 00:20:14.506 "memory_domains": [ 00:20:14.506 { 00:20:14.506 "dma_device_id": "system", 00:20:14.506 "dma_device_type": 1 00:20:14.506 }, 00:20:14.506 { 00:20:14.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.506 "dma_device_type": 2 00:20:14.506 }, 00:20:14.507 { 00:20:14.507 "dma_device_id": "system", 00:20:14.507 "dma_device_type": 1 00:20:14.507 }, 00:20:14.507 { 00:20:14.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.507 "dma_device_type": 2 00:20:14.507 }, 00:20:14.507 { 00:20:14.507 "dma_device_id": "system", 00:20:14.507 "dma_device_type": 1 00:20:14.507 }, 00:20:14.507 { 00:20:14.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.507 "dma_device_type": 2 00:20:14.507 } 00:20:14.507 ], 00:20:14.507 "driver_specific": { 00:20:14.507 "raid": { 00:20:14.507 "uuid": "41c54019-6c89-4363-87cf-dfa73ea024dd", 00:20:14.507 "strip_size_kb": 64, 00:20:14.507 
"state": "online", 00:20:14.507 "raid_level": "concat", 00:20:14.507 "superblock": true, 00:20:14.507 "num_base_bdevs": 3, 00:20:14.507 "num_base_bdevs_discovered": 3, 00:20:14.507 "num_base_bdevs_operational": 3, 00:20:14.507 "base_bdevs_list": [ 00:20:14.507 { 00:20:14.507 "name": "NewBaseBdev", 00:20:14.507 "uuid": "214cccdc-d81f-4267-8667-96e888ed7e7b", 00:20:14.507 "is_configured": true, 00:20:14.507 "data_offset": 2048, 00:20:14.507 "data_size": 63488 00:20:14.507 }, 00:20:14.507 { 00:20:14.507 "name": "BaseBdev2", 00:20:14.507 "uuid": "b4ac4a8a-4c3f-4675-abb4-1ef11c048b4b", 00:20:14.507 "is_configured": true, 00:20:14.507 "data_offset": 2048, 00:20:14.507 "data_size": 63488 00:20:14.507 }, 00:20:14.507 { 00:20:14.507 "name": "BaseBdev3", 00:20:14.507 "uuid": "34e22f7b-2550-4209-aab0-6ed6b66e2c58", 00:20:14.507 "is_configured": true, 00:20:14.507 "data_offset": 2048, 00:20:14.507 "data_size": 63488 00:20:14.507 } 00:20:14.507 ] 00:20:14.507 } 00:20:14.507 } 00:20:14.507 }' 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:14.507 BaseBdev2 00:20:14.507 BaseBdev3' 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.507 17:07:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:14.507 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:14.766 17:07:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.766 [2024-11-08 17:07:51.264882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:14.766 [2024-11-08 17:07:51.264996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:14.766 [2024-11-08 17:07:51.265137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.766 [2024-11-08 17:07:51.265231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.766 [2024-11-08 17:07:51.265274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 65032 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 65032 ']' 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 
65032 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65032 00:20:14.766 killing process with pid 65032 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65032' 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 65032 00:20:14.766 [2024-11-08 17:07:51.300909] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.766 17:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 65032 00:20:15.024 [2024-11-08 17:07:51.498005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:15.589 17:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:15.589 00:20:15.589 real 0m8.215s 00:20:15.589 user 0m13.072s 00:20:15.589 sys 0m1.346s 00:20:15.589 ************************************ 00:20:15.589 END TEST raid_state_function_test_sb 00:20:15.589 ************************************ 00:20:15.589 17:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:15.589 17:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.847 17:07:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:20:15.847 17:07:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:15.847 
17:07:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:15.847 17:07:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:15.847 ************************************ 00:20:15.847 START TEST raid_superblock_test 00:20:15.847 ************************************ 00:20:15.847 17:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:20:15.847 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:20:15.847 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:15.847 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:15.847 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:15.847 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:15.847 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65630 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65630 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65630 ']' 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:15.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:15.848 17:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.848 [2024-11-08 17:07:52.403989] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:20:15.848 [2024-11-08 17:07:52.404139] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65630 ] 00:20:16.105 [2024-11-08 17:07:52.564038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.105 [2024-11-08 17:07:52.681829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.363 [2024-11-08 17:07:52.842750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.363 [2024-11-08 17:07:52.842826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:20:16.621 
17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.621 malloc1 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.621 [2024-11-08 17:07:53.301124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:16.621 [2024-11-08 17:07:53.301354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.621 [2024-11-08 17:07:53.301423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:16.621 [2024-11-08 17:07:53.301622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.621 [2024-11-08 17:07:53.304307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.621 [2024-11-08 17:07:53.304346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:16.621 pt1 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.621 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.879 malloc2 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.879 [2024-11-08 17:07:53.345367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:16.879 [2024-11-08 17:07:53.345546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.879 [2024-11-08 17:07:53.345576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:16.879 [2024-11-08 17:07:53.345585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.879 [2024-11-08 17:07:53.347950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.879 [2024-11-08 17:07:53.347986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:16.879 
pt2 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.879 malloc3 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.879 [2024-11-08 17:07:53.402293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:16.879 [2024-11-08 17:07:53.402468] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.879 [2024-11-08 17:07:53.402516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:16.879 [2024-11-08 17:07:53.402578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.879 [2024-11-08 17:07:53.404921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.879 [2024-11-08 17:07:53.405039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:16.879 pt3 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.879 [2024-11-08 17:07:53.414362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:16.879 [2024-11-08 17:07:53.416451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:16.879 [2024-11-08 17:07:53.416605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:16.879 [2024-11-08 17:07:53.416830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:16.879 [2024-11-08 17:07:53.416904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:16.879 [2024-11-08 17:07:53.417217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:20:16.879 [2024-11-08 17:07:53.417442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:16.879 [2024-11-08 17:07:53.417456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:16.879 [2024-11-08 17:07:53.417622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.879 17:07:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.879 "name": "raid_bdev1", 00:20:16.879 "uuid": "c21c5009-6346-46b7-a3dd-7ee09adc92eb", 00:20:16.879 "strip_size_kb": 64, 00:20:16.879 "state": "online", 00:20:16.879 "raid_level": "concat", 00:20:16.879 "superblock": true, 00:20:16.879 "num_base_bdevs": 3, 00:20:16.879 "num_base_bdevs_discovered": 3, 00:20:16.879 "num_base_bdevs_operational": 3, 00:20:16.879 "base_bdevs_list": [ 00:20:16.879 { 00:20:16.879 "name": "pt1", 00:20:16.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:16.879 "is_configured": true, 00:20:16.879 "data_offset": 2048, 00:20:16.879 "data_size": 63488 00:20:16.879 }, 00:20:16.879 { 00:20:16.879 "name": "pt2", 00:20:16.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:16.879 "is_configured": true, 00:20:16.879 "data_offset": 2048, 00:20:16.879 "data_size": 63488 00:20:16.879 }, 00:20:16.879 { 00:20:16.879 "name": "pt3", 00:20:16.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:16.879 "is_configured": true, 00:20:16.879 "data_offset": 2048, 00:20:16.879 "data_size": 63488 00:20:16.879 } 00:20:16.879 ] 00:20:16.879 }' 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.879 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:17.137 [2024-11-08 17:07:53.754744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:17.137 "name": "raid_bdev1", 00:20:17.137 "aliases": [ 00:20:17.137 "c21c5009-6346-46b7-a3dd-7ee09adc92eb" 00:20:17.137 ], 00:20:17.137 "product_name": "Raid Volume", 00:20:17.137 "block_size": 512, 00:20:17.137 "num_blocks": 190464, 00:20:17.137 "uuid": "c21c5009-6346-46b7-a3dd-7ee09adc92eb", 00:20:17.137 "assigned_rate_limits": { 00:20:17.137 "rw_ios_per_sec": 0, 00:20:17.137 "rw_mbytes_per_sec": 0, 00:20:17.137 "r_mbytes_per_sec": 0, 00:20:17.137 "w_mbytes_per_sec": 0 00:20:17.137 }, 00:20:17.137 "claimed": false, 00:20:17.137 "zoned": false, 00:20:17.137 "supported_io_types": { 00:20:17.137 "read": true, 00:20:17.137 "write": true, 00:20:17.137 "unmap": true, 00:20:17.137 "flush": true, 00:20:17.137 "reset": true, 00:20:17.137 "nvme_admin": false, 00:20:17.137 "nvme_io": false, 00:20:17.137 "nvme_io_md": false, 00:20:17.137 "write_zeroes": true, 00:20:17.137 "zcopy": false, 00:20:17.137 "get_zone_info": false, 00:20:17.137 "zone_management": false, 00:20:17.137 "zone_append": false, 00:20:17.137 "compare": 
false, 00:20:17.137 "compare_and_write": false, 00:20:17.137 "abort": false, 00:20:17.137 "seek_hole": false, 00:20:17.137 "seek_data": false, 00:20:17.137 "copy": false, 00:20:17.137 "nvme_iov_md": false 00:20:17.137 }, 00:20:17.137 "memory_domains": [ 00:20:17.137 { 00:20:17.137 "dma_device_id": "system", 00:20:17.137 "dma_device_type": 1 00:20:17.137 }, 00:20:17.137 { 00:20:17.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.137 "dma_device_type": 2 00:20:17.137 }, 00:20:17.137 { 00:20:17.137 "dma_device_id": "system", 00:20:17.137 "dma_device_type": 1 00:20:17.137 }, 00:20:17.137 { 00:20:17.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.137 "dma_device_type": 2 00:20:17.137 }, 00:20:17.137 { 00:20:17.137 "dma_device_id": "system", 00:20:17.137 "dma_device_type": 1 00:20:17.137 }, 00:20:17.137 { 00:20:17.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.137 "dma_device_type": 2 00:20:17.137 } 00:20:17.137 ], 00:20:17.137 "driver_specific": { 00:20:17.137 "raid": { 00:20:17.137 "uuid": "c21c5009-6346-46b7-a3dd-7ee09adc92eb", 00:20:17.137 "strip_size_kb": 64, 00:20:17.137 "state": "online", 00:20:17.137 "raid_level": "concat", 00:20:17.137 "superblock": true, 00:20:17.137 "num_base_bdevs": 3, 00:20:17.137 "num_base_bdevs_discovered": 3, 00:20:17.137 "num_base_bdevs_operational": 3, 00:20:17.137 "base_bdevs_list": [ 00:20:17.137 { 00:20:17.137 "name": "pt1", 00:20:17.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:17.137 "is_configured": true, 00:20:17.137 "data_offset": 2048, 00:20:17.137 "data_size": 63488 00:20:17.137 }, 00:20:17.137 { 00:20:17.137 "name": "pt2", 00:20:17.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:17.137 "is_configured": true, 00:20:17.137 "data_offset": 2048, 00:20:17.137 "data_size": 63488 00:20:17.137 }, 00:20:17.137 { 00:20:17.137 "name": "pt3", 00:20:17.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:17.137 "is_configured": true, 00:20:17.137 "data_offset": 2048, 00:20:17.137 
"data_size": 63488 00:20:17.137 } 00:20:17.137 ] 00:20:17.137 } 00:20:17.137 } 00:20:17.137 }' 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:17.137 pt2 00:20:17.137 pt3' 00:20:17.137 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:17.395 [2024-11-08 17:07:53.966780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c21c5009-6346-46b7-a3dd-7ee09adc92eb 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c21c5009-6346-46b7-a3dd-7ee09adc92eb ']' 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.395 17:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.395 [2024-11-08 17:07:53.998458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.395 [2024-11-08 17:07:53.998600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:17.396 [2024-11-08 17:07:53.998746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.396 [2024-11-08 17:07:53.999438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.396 [2024-11-08 17:07:53.999464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:17.396 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.680 [2024-11-08 17:07:54.114530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:17.680 [2024-11-08 17:07:54.116644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:17.680 
[2024-11-08 17:07:54.116807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:17.680 [2024-11-08 17:07:54.116888] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:17.680 [2024-11-08 17:07:54.117011] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:17.680 [2024-11-08 17:07:54.117151] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:17.680 [2024-11-08 17:07:54.117196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.680 request: 00:20:17.680 { 00:20:17.680 [2024-11-08 17:07:54.117613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:17.680 "name": "raid_bdev1", 00:20:17.680 "raid_level": "concat", 00:20:17.680 "base_bdevs": [ 00:20:17.680 "malloc1", 00:20:17.680 "malloc2", 00:20:17.680 "malloc3" 00:20:17.680 ], 00:20:17.680 "strip_size_kb": 64, 00:20:17.680 "superblock": false, 00:20:17.680 "method": "bdev_raid_create", 00:20:17.680 "req_id": 1 00:20:17.680 } 00:20:17.680 Got JSON-RPC error response 00:20:17.680 response: 00:20:17.680 { 00:20:17.680 "code": -17, 00:20:17.680 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:17.680 } 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:17.680 17:07:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.680 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.680 [2024-11-08 17:07:54.154489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:17.680 [2024-11-08 17:07:54.154651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.680 [2024-11-08 17:07:54.154679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:17.680 [2024-11-08 17:07:54.154689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.681 [2024-11-08 17:07:54.157087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.681 [2024-11-08 17:07:54.157126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:17.681 [2024-11-08 17:07:54.157225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:17.681 [2024-11-08 17:07:54.157283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:20:17.681 pt1 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.681 "name": "raid_bdev1", 00:20:17.681 "uuid": 
"c21c5009-6346-46b7-a3dd-7ee09adc92eb", 00:20:17.681 "strip_size_kb": 64, 00:20:17.681 "state": "configuring", 00:20:17.681 "raid_level": "concat", 00:20:17.681 "superblock": true, 00:20:17.681 "num_base_bdevs": 3, 00:20:17.681 "num_base_bdevs_discovered": 1, 00:20:17.681 "num_base_bdevs_operational": 3, 00:20:17.681 "base_bdevs_list": [ 00:20:17.681 { 00:20:17.681 "name": "pt1", 00:20:17.681 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:17.681 "is_configured": true, 00:20:17.681 "data_offset": 2048, 00:20:17.681 "data_size": 63488 00:20:17.681 }, 00:20:17.681 { 00:20:17.681 "name": null, 00:20:17.681 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:17.681 "is_configured": false, 00:20:17.681 "data_offset": 2048, 00:20:17.681 "data_size": 63488 00:20:17.681 }, 00:20:17.681 { 00:20:17.681 "name": null, 00:20:17.681 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:17.681 "is_configured": false, 00:20:17.681 "data_offset": 2048, 00:20:17.681 "data_size": 63488 00:20:17.681 } 00:20:17.681 ] 00:20:17.681 }' 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.681 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.940 [2024-11-08 17:07:54.482588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:17.940 [2024-11-08 17:07:54.482822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.940 [2024-11-08 17:07:54.482856] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:17.940 [2024-11-08 17:07:54.482867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.940 [2024-11-08 17:07:54.483349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.940 [2024-11-08 17:07:54.483365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:17.940 [2024-11-08 17:07:54.483456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:17.940 [2024-11-08 17:07:54.483478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:17.940 pt2 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.940 [2024-11-08 17:07:54.490602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.940 "name": "raid_bdev1", 00:20:17.940 "uuid": "c21c5009-6346-46b7-a3dd-7ee09adc92eb", 00:20:17.940 "strip_size_kb": 64, 00:20:17.940 "state": "configuring", 00:20:17.940 "raid_level": "concat", 00:20:17.940 "superblock": true, 00:20:17.940 "num_base_bdevs": 3, 00:20:17.940 "num_base_bdevs_discovered": 1, 00:20:17.940 "num_base_bdevs_operational": 3, 00:20:17.940 "base_bdevs_list": [ 00:20:17.940 { 00:20:17.940 "name": "pt1", 00:20:17.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:17.940 "is_configured": true, 00:20:17.940 "data_offset": 2048, 00:20:17.940 "data_size": 63488 00:20:17.940 }, 00:20:17.940 { 00:20:17.940 "name": null, 00:20:17.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:17.940 "is_configured": false, 00:20:17.940 "data_offset": 0, 00:20:17.940 "data_size": 63488 00:20:17.940 }, 00:20:17.940 { 00:20:17.940 "name": null, 00:20:17.940 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:20:17.940 "is_configured": false, 00:20:17.940 "data_offset": 2048, 00:20:17.940 "data_size": 63488 00:20:17.940 } 00:20:17.940 ] 00:20:17.940 }' 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.940 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.198 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:18.198 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:18.198 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:18.198 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.198 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.198 [2024-11-08 17:07:54.834640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:18.198 [2024-11-08 17:07:54.834858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.198 [2024-11-08 17:07:54.834902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:18.198 [2024-11-08 17:07:54.835394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.198 [2024-11-08 17:07:54.835952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.198 [2024-11-08 17:07:54.835980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:18.198 [2024-11-08 17:07:54.836071] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:18.199 [2024-11-08 17:07:54.836096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:18.199 pt2 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.199 [2024-11-08 17:07:54.846639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:18.199 [2024-11-08 17:07:54.846687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.199 [2024-11-08 17:07:54.846702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:18.199 [2024-11-08 17:07:54.846713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.199 [2024-11-08 17:07:54.847150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.199 [2024-11-08 17:07:54.847169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:18.199 [2024-11-08 17:07:54.847241] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:18.199 [2024-11-08 17:07:54.847263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:18.199 [2024-11-08 17:07:54.847388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:18.199 [2024-11-08 17:07:54.847399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:18.199 [2024-11-08 17:07:54.847646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:18.199 [2024-11-08 
17:07:54.847803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:18.199 [2024-11-08 17:07:54.847812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:18.199 [2024-11-08 17:07:54.847947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.199 pt3 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.199 "name": "raid_bdev1", 00:20:18.199 "uuid": "c21c5009-6346-46b7-a3dd-7ee09adc92eb", 00:20:18.199 "strip_size_kb": 64, 00:20:18.199 "state": "online", 00:20:18.199 "raid_level": "concat", 00:20:18.199 "superblock": true, 00:20:18.199 "num_base_bdevs": 3, 00:20:18.199 "num_base_bdevs_discovered": 3, 00:20:18.199 "num_base_bdevs_operational": 3, 00:20:18.199 "base_bdevs_list": [ 00:20:18.199 { 00:20:18.199 "name": "pt1", 00:20:18.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:18.199 "is_configured": true, 00:20:18.199 "data_offset": 2048, 00:20:18.199 "data_size": 63488 00:20:18.199 }, 00:20:18.199 { 00:20:18.199 "name": "pt2", 00:20:18.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:18.199 "is_configured": true, 00:20:18.199 "data_offset": 2048, 00:20:18.199 "data_size": 63488 00:20:18.199 }, 00:20:18.199 { 00:20:18.199 "name": "pt3", 00:20:18.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:18.199 "is_configured": true, 00:20:18.199 "data_offset": 2048, 00:20:18.199 "data_size": 63488 00:20:18.199 } 00:20:18.199 ] 00:20:18.199 }' 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.199 17:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.457 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:18.457 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:18.457 17:07:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:18.457 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:18.457 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:18.457 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:18.457 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:18.457 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:18.457 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.457 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.457 [2024-11-08 17:07:55.155094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.457 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.715 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:18.715 "name": "raid_bdev1", 00:20:18.715 "aliases": [ 00:20:18.715 "c21c5009-6346-46b7-a3dd-7ee09adc92eb" 00:20:18.715 ], 00:20:18.715 "product_name": "Raid Volume", 00:20:18.715 "block_size": 512, 00:20:18.715 "num_blocks": 190464, 00:20:18.715 "uuid": "c21c5009-6346-46b7-a3dd-7ee09adc92eb", 00:20:18.715 "assigned_rate_limits": { 00:20:18.715 "rw_ios_per_sec": 0, 00:20:18.715 "rw_mbytes_per_sec": 0, 00:20:18.715 "r_mbytes_per_sec": 0, 00:20:18.715 "w_mbytes_per_sec": 0 00:20:18.715 }, 00:20:18.715 "claimed": false, 00:20:18.715 "zoned": false, 00:20:18.715 "supported_io_types": { 00:20:18.715 "read": true, 00:20:18.715 "write": true, 00:20:18.715 "unmap": true, 00:20:18.715 "flush": true, 00:20:18.715 "reset": true, 00:20:18.715 "nvme_admin": false, 00:20:18.715 "nvme_io": false, 00:20:18.715 "nvme_io_md": false, 00:20:18.715 
"write_zeroes": true, 00:20:18.715 "zcopy": false, 00:20:18.715 "get_zone_info": false, 00:20:18.715 "zone_management": false, 00:20:18.715 "zone_append": false, 00:20:18.715 "compare": false, 00:20:18.715 "compare_and_write": false, 00:20:18.715 "abort": false, 00:20:18.715 "seek_hole": false, 00:20:18.715 "seek_data": false, 00:20:18.715 "copy": false, 00:20:18.715 "nvme_iov_md": false 00:20:18.715 }, 00:20:18.715 "memory_domains": [ 00:20:18.715 { 00:20:18.715 "dma_device_id": "system", 00:20:18.715 "dma_device_type": 1 00:20:18.715 }, 00:20:18.715 { 00:20:18.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.715 "dma_device_type": 2 00:20:18.715 }, 00:20:18.715 { 00:20:18.715 "dma_device_id": "system", 00:20:18.715 "dma_device_type": 1 00:20:18.715 }, 00:20:18.715 { 00:20:18.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.716 "dma_device_type": 2 00:20:18.716 }, 00:20:18.716 { 00:20:18.716 "dma_device_id": "system", 00:20:18.716 "dma_device_type": 1 00:20:18.716 }, 00:20:18.716 { 00:20:18.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.716 "dma_device_type": 2 00:20:18.716 } 00:20:18.716 ], 00:20:18.716 "driver_specific": { 00:20:18.716 "raid": { 00:20:18.716 "uuid": "c21c5009-6346-46b7-a3dd-7ee09adc92eb", 00:20:18.716 "strip_size_kb": 64, 00:20:18.716 "state": "online", 00:20:18.716 "raid_level": "concat", 00:20:18.716 "superblock": true, 00:20:18.716 "num_base_bdevs": 3, 00:20:18.716 "num_base_bdevs_discovered": 3, 00:20:18.716 "num_base_bdevs_operational": 3, 00:20:18.716 "base_bdevs_list": [ 00:20:18.716 { 00:20:18.716 "name": "pt1", 00:20:18.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:18.716 "is_configured": true, 00:20:18.716 "data_offset": 2048, 00:20:18.716 "data_size": 63488 00:20:18.716 }, 00:20:18.716 { 00:20:18.716 "name": "pt2", 00:20:18.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:18.716 "is_configured": true, 00:20:18.716 "data_offset": 2048, 00:20:18.716 "data_size": 63488 00:20:18.716 }, 00:20:18.716 
{ 00:20:18.716 "name": "pt3", 00:20:18.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:18.716 "is_configured": true, 00:20:18.716 "data_offset": 2048, 00:20:18.716 "data_size": 63488 00:20:18.716 } 00:20:18.716 ] 00:20:18.716 } 00:20:18.716 } 00:20:18.716 }' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:18.716 pt2 00:20:18.716 pt3' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:18.716 17:07:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.716 
[2024-11-08 17:07:55.371093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c21c5009-6346-46b7-a3dd-7ee09adc92eb '!=' c21c5009-6346-46b7-a3dd-7ee09adc92eb ']' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65630 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65630 ']' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65630 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65630 00:20:18.716 killing process with pid 65630 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65630' 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65630 00:20:18.716 [2024-11-08 17:07:55.425095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:18.716 17:07:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@976 -- # wait 65630 00:20:18.716 [2024-11-08 17:07:55.425208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.716 [2024-11-08 17:07:55.425284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:18.716 [2024-11-08 17:07:55.425297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:18.974 [2024-11-08 17:07:55.626361] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:19.908 17:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:19.908 00:20:19.908 real 0m4.227s 00:20:19.908 user 0m5.892s 00:20:19.908 sys 0m0.708s 00:20:19.908 ************************************ 00:20:19.908 END TEST raid_superblock_test 00:20:19.908 ************************************ 00:20:19.908 17:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:19.908 17:07:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.908 17:07:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:20:19.908 17:07:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:19.908 17:07:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:19.908 17:07:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:20.166 ************************************ 00:20:20.166 START TEST raid_read_error_test 00:20:20.166 ************************************ 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:20.166 17:07:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OzYOA6VljZ 00:20:20.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65872 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65872 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65872 ']' 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.166 17:07:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:20.166 [2024-11-08 17:07:56.718365] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:20:20.166 [2024-11-08 17:07:56.718524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65872 ] 00:20:20.424 [2024-11-08 17:07:56.884491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.424 [2024-11-08 17:07:57.010230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.682 [2024-11-08 17:07:57.161110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:20.682 [2024-11-08 17:07:57.161170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:20.968 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.968 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:20:20.968 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:20.968 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:20.968 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.968 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.226 BaseBdev1_malloc 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.226 true 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.226 [2024-11-08 17:07:57.708426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:21.226 [2024-11-08 17:07:57.708609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.226 [2024-11-08 17:07:57.708640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:21.226 [2024-11-08 17:07:57.708651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.226 [2024-11-08 17:07:57.711009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.226 [2024-11-08 17:07:57.711048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:21.226 BaseBdev1 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.226 BaseBdev2_malloc 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:21.226 17:07:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.227 true 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.227 [2024-11-08 17:07:57.754716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:21.227 [2024-11-08 17:07:57.754903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.227 [2024-11-08 17:07:57.754949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:21.227 [2024-11-08 17:07:57.755325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.227 [2024-11-08 17:07:57.757794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.227 BaseBdev2 00:20:21.227 [2024-11-08 17:07:57.757935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.227 BaseBdev3_malloc 00:20:21.227 17:07:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.227 true 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.227 [2024-11-08 17:07:57.813820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:21.227 [2024-11-08 17:07:57.813883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.227 [2024-11-08 17:07:57.813902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:21.227 [2024-11-08 17:07:57.813914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.227 [2024-11-08 17:07:57.816228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.227 [2024-11-08 17:07:57.816268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:21.227 BaseBdev3 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.227 [2024-11-08 17:07:57.821905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:21.227 [2024-11-08 17:07:57.823994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:21.227 [2024-11-08 17:07:57.824172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.227 [2024-11-08 17:07:57.824414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:21.227 [2024-11-08 17:07:57.824487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:21.227 [2024-11-08 17:07:57.824811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:21.227 [2024-11-08 17:07:57.824999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:21.227 [2024-11-08 17:07:57.825110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:21.227 [2024-11-08 17:07:57.825305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.227 17:07:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.227 "name": "raid_bdev1", 00:20:21.227 "uuid": "283cce4a-1ab7-45f3-a032-bd48b88ad253", 00:20:21.227 "strip_size_kb": 64, 00:20:21.227 "state": "online", 00:20:21.227 "raid_level": "concat", 00:20:21.227 "superblock": true, 00:20:21.227 "num_base_bdevs": 3, 00:20:21.227 "num_base_bdevs_discovered": 3, 00:20:21.227 "num_base_bdevs_operational": 3, 00:20:21.227 "base_bdevs_list": [ 00:20:21.227 { 00:20:21.227 "name": "BaseBdev1", 00:20:21.227 "uuid": "0eb62244-0ff2-53e9-b95b-6ea1f08e8545", 00:20:21.227 "is_configured": true, 00:20:21.227 "data_offset": 2048, 00:20:21.227 "data_size": 63488 00:20:21.227 }, 00:20:21.227 { 00:20:21.227 "name": "BaseBdev2", 00:20:21.227 "uuid": "b3fae864-7411-532f-bf85-239b8ac0dc67", 00:20:21.227 "is_configured": true, 00:20:21.227 "data_offset": 2048, 00:20:21.227 "data_size": 63488 
00:20:21.227 }, 00:20:21.227 { 00:20:21.227 "name": "BaseBdev3", 00:20:21.227 "uuid": "f7cad97c-a3cf-58e4-b322-c33e282a1a5b", 00:20:21.227 "is_configured": true, 00:20:21.227 "data_offset": 2048, 00:20:21.227 "data_size": 63488 00:20:21.227 } 00:20:21.227 ] 00:20:21.227 }' 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.227 17:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.486 17:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:21.486 17:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:21.744 [2024-11-08 17:07:58.291083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
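The `verify_raid_bdev_state raid_bdev1 online concat 64 3` calls above fetch the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'` and compare it against the expected state. The helper itself is a bash function in `bdev_bdev_raid.sh` whose body is not shown in this log; the sketch below is a guess at the comparison it performs, inferred from its local variables (`expected_state`, `raid_level`, `strip_size`, `num_base_bdevs_operational`) and the JSON fields in the output above. Field names are copied from the log; the function body is an assumption.

```python
import json

# RPC output for raid_bdev1 as reported in the log above, abridged to the
# fields the check reads (base_bdevs_list omitted for brevity).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "concat",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    """Hypothetical mirror of the bash helper: every reported field must
    match what the test expects after configuration / error injection."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_discovered"] == num_operational
    assert info["num_base_bdevs_operational"] == num_operational

# Same invocation as `verify_raid_bdev_state raid_bdev1 online concat 64 3`.
verify_raid_bdev_state(raid_bdev_info, "online", "concat", 64, 3)
```

Note that the test runs this check both before and after injecting a read error on `EE_BaseBdev1_malloc`: because concat carries no redundancy, all three base bdevs are still expected to be discovered and operational, and the failure is expected to surface in the I/O statistics instead.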
00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.677 "name": "raid_bdev1", 00:20:22.677 "uuid": "283cce4a-1ab7-45f3-a032-bd48b88ad253", 00:20:22.677 "strip_size_kb": 64, 00:20:22.677 "state": "online", 00:20:22.677 "raid_level": "concat", 00:20:22.677 "superblock": true, 00:20:22.677 "num_base_bdevs": 3, 00:20:22.677 "num_base_bdevs_discovered": 3, 00:20:22.677 "num_base_bdevs_operational": 3, 00:20:22.677 "base_bdevs_list": [ 00:20:22.677 { 00:20:22.677 "name": "BaseBdev1", 00:20:22.677 "uuid": "0eb62244-0ff2-53e9-b95b-6ea1f08e8545", 00:20:22.677 "is_configured": true, 00:20:22.677 "data_offset": 2048, 00:20:22.677 "data_size": 63488 
00:20:22.677 }, 00:20:22.677 { 00:20:22.677 "name": "BaseBdev2", 00:20:22.677 "uuid": "b3fae864-7411-532f-bf85-239b8ac0dc67", 00:20:22.677 "is_configured": true, 00:20:22.677 "data_offset": 2048, 00:20:22.677 "data_size": 63488 00:20:22.677 }, 00:20:22.677 { 00:20:22.677 "name": "BaseBdev3", 00:20:22.677 "uuid": "f7cad97c-a3cf-58e4-b322-c33e282a1a5b", 00:20:22.677 "is_configured": true, 00:20:22.677 "data_offset": 2048, 00:20:22.677 "data_size": 63488 00:20:22.677 } 00:20:22.677 ] 00:20:22.677 }' 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.677 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.935 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:22.935 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.935 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.935 [2024-11-08 17:07:59.533237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.935 [2024-11-08 17:07:59.533268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.936 [2024-11-08 17:07:59.536347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.936 [2024-11-08 17:07:59.536512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.936 [2024-11-08 17:07:59.536564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.936 [2024-11-08 17:07:59.536578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:22.936 { 00:20:22.936 "results": [ 00:20:22.936 { 00:20:22.936 "job": "raid_bdev1", 00:20:22.936 "core_mask": "0x1", 00:20:22.936 "workload": "randrw", 00:20:22.936 "percentage": 50, 
00:20:22.936 "status": "finished", 00:20:22.936 "queue_depth": 1, 00:20:22.936 "io_size": 131072, 00:20:22.936 "runtime": 1.240128, 00:20:22.936 "iops": 13898.565309387419, 00:20:22.936 "mibps": 1737.3206636734274, 00:20:22.936 "io_failed": 1, 00:20:22.936 "io_timeout": 0, 00:20:22.936 "avg_latency_us": 99.10400899674671, 00:20:22.936 "min_latency_us": 33.28, 00:20:22.936 "max_latency_us": 1714.0184615384615 00:20:22.936 } 00:20:22.936 ], 00:20:22.936 "core_count": 1 00:20:22.936 } 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65872 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65872 ']' 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65872 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65872 00:20:22.936 killing process with pid 65872 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65872' 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65872 00:20:22.936 17:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65872 00:20:22.936 [2024-11-08 17:07:59.566787] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:23.192 [2024-11-08 17:07:59.722844] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:24.125 17:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OzYOA6VljZ 00:20:24.125 17:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:24.125 17:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:24.125 17:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:20:24.125 17:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:20:24.125 17:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:24.125 17:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:24.125 17:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:20:24.125 00:20:24.125 real 0m3.897s 00:20:24.125 user 0m4.733s 00:20:24.125 sys 0m0.450s 00:20:24.125 ************************************ 00:20:24.125 END TEST raid_read_error_test 00:20:24.125 ************************************ 00:20:24.125 17:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:24.125 17:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.125 17:08:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:20:24.126 17:08:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:24.126 17:08:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:24.126 17:08:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:24.126 ************************************ 00:20:24.126 START TEST raid_write_error_test 00:20:24.126 ************************************ 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:20:24.126 17:08:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:24.126 17:08:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.61M6DWeCpR 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66012 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66012 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 66012 ']' 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
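The bdevperf results JSON printed by these tests is internally consistent, which is useful when reading the log: with the `-o 128k` I/O size used here, MiB/s is simply iops/8, and the `fail_per_s` value that the harness later extracts with its `grep`/`awk` pipeline (0.81 in the read test above) is `io_failed` divided by `runtime`. A small check using the figures copied from the `raid_read_error_test` results block; the two-decimal rounding is an assumption about how the harness's awk output is formatted.

```python
# Figures from the raid_read_error_test results JSON earlier in this log.
result = {
    "io_size": 131072,            # 128 KiB per I/O (bdevperf -o 128k)
    "runtime": 1.240128,          # seconds
    "iops": 13898.565309387419,
    "mibps": 1737.3206636734274,
    "io_failed": 1,               # the single injected read error
}

# Throughput: (I/Os per second) * (bytes per I/O), expressed in MiB/s.
mibps = result["iops"] * result["io_size"] / (1024 * 1024)
assert abs(mibps - result["mibps"]) < 1e-9

# fail_per_s: failed I/Os per second of runtime.
fail_per_s = round(result["io_failed"] / result["runtime"], 2)

# concat has no redundancy (has_redundancy returns 1 for it), so the
# injected error must be visible to the initiator: 0.81 != 0.00.
assert f"{fail_per_s:.2f}" != "0.00"
print(f"{fail_per_s:.2f}")  # → 0.81
```

This is why the `[[ 0.81 != \0\.\0\0 ]]` check at the end of the read test passes: for a redundant level such as raid1 the same injection would be absorbed by the surviving mirrors and the expected failure rate would be 0.00.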
00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:24.126 17:08:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.126 [2024-11-08 17:08:00.679838] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:20:24.126 [2024-11-08 17:08:00.679976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66012 ] 00:20:24.384 [2024-11-08 17:08:00.838127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.384 [2024-11-08 17:08:00.956538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.642 [2024-11-08 17:08:01.103105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.642 [2024-11-08 17:08:01.103176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.900 BaseBdev1_malloc 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.900 true 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.900 [2024-11-08 17:08:01.591326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:24.900 [2024-11-08 17:08:01.591498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.900 [2024-11-08 17:08:01.591529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:24.900 [2024-11-08 17:08:01.591542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.900 [2024-11-08 17:08:01.593907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.900 [2024-11-08 17:08:01.593944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:24.900 BaseBdev1 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.900 17:08:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.158 BaseBdev2_malloc 00:20:25.158 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.158 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:25.158 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.158 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.158 true 00:20:25.158 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.158 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:25.158 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.158 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.158 [2024-11-08 17:08:01.637779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:25.158 [2024-11-08 17:08:01.637925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.158 [2024-11-08 17:08:01.637947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:25.158 [2024-11-08 17:08:01.637958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.158 [2024-11-08 17:08:01.640232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.158 [2024-11-08 17:08:01.640269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:25.158 BaseBdev2 00:20:25.158 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.158 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:25.159 17:08:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.159 BaseBdev3_malloc 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.159 true 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.159 [2024-11-08 17:08:01.704202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:25.159 [2024-11-08 17:08:01.704365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.159 [2024-11-08 17:08:01.704436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:25.159 [2024-11-08 17:08:01.704544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.159 [2024-11-08 17:08:01.706969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.159 [2024-11-08 17:08:01.707082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:20:25.159 BaseBdev3 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.159 [2024-11-08 17:08:01.712289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.159 [2024-11-08 17:08:01.714366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:25.159 [2024-11-08 17:08:01.714537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:25.159 [2024-11-08 17:08:01.714782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:25.159 [2024-11-08 17:08:01.714796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:25.159 [2024-11-08 17:08:01.715088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:25.159 [2024-11-08 17:08:01.715242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:25.159 [2024-11-08 17:08:01.715255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:25.159 [2024-11-08 17:08:01.715408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.159 "name": "raid_bdev1", 00:20:25.159 "uuid": "9dfeb8cd-f235-4437-9765-ff63b2904108", 00:20:25.159 "strip_size_kb": 64, 00:20:25.159 "state": "online", 00:20:25.159 "raid_level": "concat", 00:20:25.159 "superblock": true, 00:20:25.159 "num_base_bdevs": 3, 00:20:25.159 "num_base_bdevs_discovered": 3, 00:20:25.159 "num_base_bdevs_operational": 3, 00:20:25.159 "base_bdevs_list": [ 00:20:25.159 { 00:20:25.159 
"name": "BaseBdev1", 00:20:25.159 "uuid": "5ff1fe9c-525b-564a-8279-798d39cf0558", 00:20:25.159 "is_configured": true, 00:20:25.159 "data_offset": 2048, 00:20:25.159 "data_size": 63488 00:20:25.159 }, 00:20:25.159 { 00:20:25.159 "name": "BaseBdev2", 00:20:25.159 "uuid": "244d420f-a612-5819-9872-026db86b05c9", 00:20:25.159 "is_configured": true, 00:20:25.159 "data_offset": 2048, 00:20:25.159 "data_size": 63488 00:20:25.159 }, 00:20:25.159 { 00:20:25.159 "name": "BaseBdev3", 00:20:25.159 "uuid": "e8a1d097-e564-5d4a-804b-650363cf09fe", 00:20:25.159 "is_configured": true, 00:20:25.159 "data_offset": 2048, 00:20:25.159 "data_size": 63488 00:20:25.159 } 00:20:25.159 ] 00:20:25.159 }' 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.159 17:08:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.417 17:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:25.417 17:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:25.417 [2024-11-08 17:08:02.129447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.348 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.605 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.605 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.605 "name": "raid_bdev1", 00:20:26.605 "uuid": "9dfeb8cd-f235-4437-9765-ff63b2904108", 00:20:26.605 "strip_size_kb": 64, 00:20:26.605 "state": "online", 
00:20:26.605 "raid_level": "concat", 00:20:26.605 "superblock": true, 00:20:26.605 "num_base_bdevs": 3, 00:20:26.605 "num_base_bdevs_discovered": 3, 00:20:26.605 "num_base_bdevs_operational": 3, 00:20:26.605 "base_bdevs_list": [ 00:20:26.605 { 00:20:26.605 "name": "BaseBdev1", 00:20:26.605 "uuid": "5ff1fe9c-525b-564a-8279-798d39cf0558", 00:20:26.605 "is_configured": true, 00:20:26.605 "data_offset": 2048, 00:20:26.605 "data_size": 63488 00:20:26.605 }, 00:20:26.605 { 00:20:26.605 "name": "BaseBdev2", 00:20:26.605 "uuid": "244d420f-a612-5819-9872-026db86b05c9", 00:20:26.605 "is_configured": true, 00:20:26.605 "data_offset": 2048, 00:20:26.605 "data_size": 63488 00:20:26.605 }, 00:20:26.605 { 00:20:26.605 "name": "BaseBdev3", 00:20:26.605 "uuid": "e8a1d097-e564-5d4a-804b-650363cf09fe", 00:20:26.605 "is_configured": true, 00:20:26.605 "data_offset": 2048, 00:20:26.605 "data_size": 63488 00:20:26.605 } 00:20:26.605 ] 00:20:26.605 }' 00:20:26.605 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.605 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.863 [2024-11-08 17:08:03.375532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:26.863 [2024-11-08 17:08:03.375672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.863 [2024-11-08 17:08:03.378824] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.863 [2024-11-08 17:08:03.378965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.863 [2024-11-08 17:08:03.379033] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.863 [2024-11-08 17:08:03.379166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:26.863 { 00:20:26.863 "results": [ 00:20:26.863 { 00:20:26.863 "job": "raid_bdev1", 00:20:26.863 "core_mask": "0x1", 00:20:26.863 "workload": "randrw", 00:20:26.863 "percentage": 50, 00:20:26.863 "status": "finished", 00:20:26.863 "queue_depth": 1, 00:20:26.863 "io_size": 131072, 00:20:26.863 "runtime": 1.244248, 00:20:26.863 "iops": 14013.283525470806, 00:20:26.863 "mibps": 1751.6604406838508, 00:20:26.863 "io_failed": 1, 00:20:26.863 "io_timeout": 0, 00:20:26.863 "avg_latency_us": 98.10914474525876, 00:20:26.863 "min_latency_us": 33.47692307692308, 00:20:26.863 "max_latency_us": 1701.4153846153847 00:20:26.863 } 00:20:26.863 ], 00:20:26.863 "core_count": 1 00:20:26.863 } 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66012 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 66012 ']' 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 66012 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66012 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:26.863 killing process with pid 66012 00:20:26.863 17:08:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66012' 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 66012 00:20:26.863 [2024-11-08 17:08:03.410732] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.863 17:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 66012 00:20:26.863 [2024-11-08 17:08:03.560331] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.796 17:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:27.796 17:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.61M6DWeCpR 00:20:27.796 17:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:27.796 17:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:20:27.796 17:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:20:27.796 17:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:27.796 17:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:27.796 17:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:20:27.796 00:20:27.796 real 0m3.745s 00:20:27.796 user 0m4.392s 00:20:27.796 sys 0m0.453s 00:20:27.796 ************************************ 00:20:27.796 END TEST raid_write_error_test 00:20:27.796 ************************************ 00:20:27.796 17:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:27.796 17:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.796 17:08:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:20:27.796 17:08:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:20:27.796 17:08:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:27.796 17:08:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:27.796 17:08:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:27.796 ************************************ 00:20:27.796 START TEST raid_state_function_test 00:20:27.796 ************************************ 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:27.796 Process raid pid: 66139 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66139 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66139' 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66139 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 66139 ']' 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:27.796 17:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.796 [2024-11-08 17:08:04.487489] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:20:27.796 [2024-11-08 17:08:04.487619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.053 [2024-11-08 17:08:04.646273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.053 [2024-11-08 17:08:04.762374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.310 [2024-11-08 17:08:04.910817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.310 [2024-11-08 17:08:04.910866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.874 [2024-11-08 17:08:05.353472] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:28.874 [2024-11-08 17:08:05.353525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:28.874 [2024-11-08 17:08:05.353536] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:28.874 [2024-11-08 17:08:05.353546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:28.874 [2024-11-08 17:08:05.353552] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:28.874 [2024-11-08 17:08:05.353561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.874 
17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.874 "name": "Existed_Raid", 00:20:28.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.874 "strip_size_kb": 0, 00:20:28.874 "state": "configuring", 00:20:28.874 "raid_level": "raid1", 00:20:28.874 "superblock": false, 00:20:28.874 "num_base_bdevs": 3, 00:20:28.874 "num_base_bdevs_discovered": 0, 00:20:28.874 "num_base_bdevs_operational": 3, 00:20:28.874 "base_bdevs_list": [ 00:20:28.874 { 00:20:28.874 "name": "BaseBdev1", 00:20:28.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.874 "is_configured": false, 00:20:28.874 "data_offset": 0, 00:20:28.874 "data_size": 0 00:20:28.874 }, 00:20:28.874 { 00:20:28.874 "name": "BaseBdev2", 00:20:28.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.874 "is_configured": false, 00:20:28.874 "data_offset": 0, 00:20:28.874 "data_size": 0 00:20:28.874 }, 00:20:28.874 { 00:20:28.874 "name": "BaseBdev3", 00:20:28.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.874 "is_configured": false, 00:20:28.874 "data_offset": 0, 00:20:28.874 "data_size": 0 00:20:28.874 } 00:20:28.874 ] 00:20:28.874 }' 00:20:28.874 17:08:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.874 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 [2024-11-08 17:08:05.661509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:29.132 [2024-11-08 17:08:05.661549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 [2024-11-08 17:08:05.669499] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:29.132 [2024-11-08 17:08:05.669545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:29.132 [2024-11-08 17:08:05.669554] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:29.132 [2024-11-08 17:08:05.669563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:29.132 [2024-11-08 17:08:05.669570] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:29.132 [2024-11-08 17:08:05.669578] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 [2024-11-08 17:08:05.704220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:29.132 BaseBdev1 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 [ 00:20:29.132 { 00:20:29.132 "name": "BaseBdev1", 00:20:29.132 "aliases": [ 00:20:29.132 "7bfa163b-28db-4e23-a184-beba1fc57472" 00:20:29.132 ], 00:20:29.132 "product_name": "Malloc disk", 00:20:29.132 "block_size": 512, 00:20:29.132 "num_blocks": 65536, 00:20:29.132 "uuid": "7bfa163b-28db-4e23-a184-beba1fc57472", 00:20:29.132 "assigned_rate_limits": { 00:20:29.132 "rw_ios_per_sec": 0, 00:20:29.132 "rw_mbytes_per_sec": 0, 00:20:29.132 "r_mbytes_per_sec": 0, 00:20:29.132 "w_mbytes_per_sec": 0 00:20:29.132 }, 00:20:29.132 "claimed": true, 00:20:29.132 "claim_type": "exclusive_write", 00:20:29.132 "zoned": false, 00:20:29.132 "supported_io_types": { 00:20:29.132 "read": true, 00:20:29.132 "write": true, 00:20:29.132 "unmap": true, 00:20:29.132 "flush": true, 00:20:29.132 "reset": true, 00:20:29.132 "nvme_admin": false, 00:20:29.132 "nvme_io": false, 00:20:29.132 "nvme_io_md": false, 00:20:29.132 "write_zeroes": true, 00:20:29.132 "zcopy": true, 00:20:29.132 "get_zone_info": false, 00:20:29.132 "zone_management": false, 00:20:29.132 "zone_append": false, 00:20:29.132 "compare": false, 00:20:29.132 "compare_and_write": false, 00:20:29.132 "abort": true, 00:20:29.132 "seek_hole": false, 00:20:29.132 "seek_data": false, 00:20:29.132 "copy": true, 00:20:29.132 "nvme_iov_md": false 00:20:29.132 }, 00:20:29.132 "memory_domains": [ 00:20:29.132 { 00:20:29.132 "dma_device_id": "system", 00:20:29.132 "dma_device_type": 1 00:20:29.132 }, 00:20:29.132 { 00:20:29.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.132 "dma_device_type": 2 00:20:29.132 } 00:20:29.132 ], 00:20:29.132 "driver_specific": {} 00:20:29.132 } 00:20:29.132 ] 00:20:29.132 17:08:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:20:29.132 "name": "Existed_Raid", 00:20:29.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.132 "strip_size_kb": 0, 00:20:29.132 "state": "configuring", 00:20:29.132 "raid_level": "raid1", 00:20:29.132 "superblock": false, 00:20:29.132 "num_base_bdevs": 3, 00:20:29.132 "num_base_bdevs_discovered": 1, 00:20:29.132 "num_base_bdevs_operational": 3, 00:20:29.132 "base_bdevs_list": [ 00:20:29.132 { 00:20:29.132 "name": "BaseBdev1", 00:20:29.132 "uuid": "7bfa163b-28db-4e23-a184-beba1fc57472", 00:20:29.132 "is_configured": true, 00:20:29.132 "data_offset": 0, 00:20:29.132 "data_size": 65536 00:20:29.132 }, 00:20:29.132 { 00:20:29.132 "name": "BaseBdev2", 00:20:29.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.132 "is_configured": false, 00:20:29.132 "data_offset": 0, 00:20:29.132 "data_size": 0 00:20:29.132 }, 00:20:29.132 { 00:20:29.132 "name": "BaseBdev3", 00:20:29.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.132 "is_configured": false, 00:20:29.132 "data_offset": 0, 00:20:29.132 "data_size": 0 00:20:29.132 } 00:20:29.132 ] 00:20:29.132 }' 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.132 17:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.417 [2024-11-08 17:08:06.040345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:29.417 [2024-11-08 17:08:06.040512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.417 [2024-11-08 17:08:06.048385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:29.417 [2024-11-08 17:08:06.050410] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:29.417 [2024-11-08 17:08:06.050530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:29.417 [2024-11-08 17:08:06.050586] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:29.417 [2024-11-08 17:08:06.050613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.417 "name": "Existed_Raid", 00:20:29.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.417 "strip_size_kb": 0, 00:20:29.417 "state": "configuring", 00:20:29.417 "raid_level": "raid1", 00:20:29.417 "superblock": false, 00:20:29.417 "num_base_bdevs": 3, 00:20:29.417 "num_base_bdevs_discovered": 1, 00:20:29.417 "num_base_bdevs_operational": 3, 00:20:29.417 "base_bdevs_list": [ 00:20:29.417 { 00:20:29.417 "name": "BaseBdev1", 00:20:29.417 "uuid": "7bfa163b-28db-4e23-a184-beba1fc57472", 00:20:29.417 "is_configured": true, 00:20:29.417 "data_offset": 0, 00:20:29.417 "data_size": 65536 00:20:29.417 }, 00:20:29.417 { 00:20:29.417 "name": "BaseBdev2", 00:20:29.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.417 
"is_configured": false, 00:20:29.417 "data_offset": 0, 00:20:29.417 "data_size": 0 00:20:29.417 }, 00:20:29.417 { 00:20:29.417 "name": "BaseBdev3", 00:20:29.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.417 "is_configured": false, 00:20:29.417 "data_offset": 0, 00:20:29.417 "data_size": 0 00:20:29.417 } 00:20:29.417 ] 00:20:29.417 }' 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.417 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.675 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:29.675 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.675 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.933 [2024-11-08 17:08:06.409038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:29.933 BaseBdev2 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:29.933 17:08:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.933 [ 00:20:29.933 { 00:20:29.933 "name": "BaseBdev2", 00:20:29.933 "aliases": [ 00:20:29.933 "168691a7-6d2e-40bd-9392-86795859051e" 00:20:29.933 ], 00:20:29.933 "product_name": "Malloc disk", 00:20:29.933 "block_size": 512, 00:20:29.933 "num_blocks": 65536, 00:20:29.933 "uuid": "168691a7-6d2e-40bd-9392-86795859051e", 00:20:29.933 "assigned_rate_limits": { 00:20:29.933 "rw_ios_per_sec": 0, 00:20:29.933 "rw_mbytes_per_sec": 0, 00:20:29.933 "r_mbytes_per_sec": 0, 00:20:29.933 "w_mbytes_per_sec": 0 00:20:29.933 }, 00:20:29.933 "claimed": true, 00:20:29.933 "claim_type": "exclusive_write", 00:20:29.933 "zoned": false, 00:20:29.933 "supported_io_types": { 00:20:29.933 "read": true, 00:20:29.933 "write": true, 00:20:29.933 "unmap": true, 00:20:29.933 "flush": true, 00:20:29.933 "reset": true, 00:20:29.933 "nvme_admin": false, 00:20:29.933 "nvme_io": false, 00:20:29.933 "nvme_io_md": false, 00:20:29.933 "write_zeroes": true, 00:20:29.933 "zcopy": true, 00:20:29.933 "get_zone_info": false, 00:20:29.933 "zone_management": false, 00:20:29.933 "zone_append": false, 00:20:29.933 "compare": false, 00:20:29.933 "compare_and_write": false, 00:20:29.933 "abort": true, 00:20:29.933 "seek_hole": false, 00:20:29.933 "seek_data": false, 00:20:29.933 "copy": true, 00:20:29.933 "nvme_iov_md": false 00:20:29.933 }, 00:20:29.933 
"memory_domains": [ 00:20:29.933 { 00:20:29.933 "dma_device_id": "system", 00:20:29.933 "dma_device_type": 1 00:20:29.933 }, 00:20:29.933 { 00:20:29.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.933 "dma_device_type": 2 00:20:29.933 } 00:20:29.933 ], 00:20:29.933 "driver_specific": {} 00:20:29.933 } 00:20:29.933 ] 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.933 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.934 "name": "Existed_Raid", 00:20:29.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.934 "strip_size_kb": 0, 00:20:29.934 "state": "configuring", 00:20:29.934 "raid_level": "raid1", 00:20:29.934 "superblock": false, 00:20:29.934 "num_base_bdevs": 3, 00:20:29.934 "num_base_bdevs_discovered": 2, 00:20:29.934 "num_base_bdevs_operational": 3, 00:20:29.934 "base_bdevs_list": [ 00:20:29.934 { 00:20:29.934 "name": "BaseBdev1", 00:20:29.934 "uuid": "7bfa163b-28db-4e23-a184-beba1fc57472", 00:20:29.934 "is_configured": true, 00:20:29.934 "data_offset": 0, 00:20:29.934 "data_size": 65536 00:20:29.934 }, 00:20:29.934 { 00:20:29.934 "name": "BaseBdev2", 00:20:29.934 "uuid": "168691a7-6d2e-40bd-9392-86795859051e", 00:20:29.934 "is_configured": true, 00:20:29.934 "data_offset": 0, 00:20:29.934 "data_size": 65536 00:20:29.934 }, 00:20:29.934 { 00:20:29.934 "name": "BaseBdev3", 00:20:29.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.934 "is_configured": false, 00:20:29.934 "data_offset": 0, 00:20:29.934 "data_size": 0 00:20:29.934 } 00:20:29.934 ] 00:20:29.934 }' 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.934 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 [2024-11-08 17:08:06.857279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:30.193 [2024-11-08 17:08:06.857331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:30.193 [2024-11-08 17:08:06.857344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:30.193 [2024-11-08 17:08:06.857624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:30.193 [2024-11-08 17:08:06.857832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:30.193 [2024-11-08 17:08:06.857844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:30.193 [2024-11-08 17:08:06.858110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.193 BaseBdev3 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 [ 00:20:30.193 { 00:20:30.193 "name": "BaseBdev3", 00:20:30.193 "aliases": [ 00:20:30.193 "f8024a60-6ab1-4ebd-b0c1-23bb9ba4593d" 00:20:30.193 ], 00:20:30.193 "product_name": "Malloc disk", 00:20:30.193 "block_size": 512, 00:20:30.193 "num_blocks": 65536, 00:20:30.193 "uuid": "f8024a60-6ab1-4ebd-b0c1-23bb9ba4593d", 00:20:30.193 "assigned_rate_limits": { 00:20:30.193 "rw_ios_per_sec": 0, 00:20:30.193 "rw_mbytes_per_sec": 0, 00:20:30.193 "r_mbytes_per_sec": 0, 00:20:30.193 "w_mbytes_per_sec": 0 00:20:30.193 }, 00:20:30.193 "claimed": true, 00:20:30.193 "claim_type": "exclusive_write", 00:20:30.193 "zoned": false, 00:20:30.193 "supported_io_types": { 00:20:30.193 "read": true, 00:20:30.193 "write": true, 00:20:30.193 "unmap": true, 00:20:30.193 "flush": true, 00:20:30.193 "reset": true, 00:20:30.193 "nvme_admin": false, 00:20:30.193 "nvme_io": false, 00:20:30.193 "nvme_io_md": false, 00:20:30.193 "write_zeroes": true, 00:20:30.193 "zcopy": true, 00:20:30.193 "get_zone_info": false, 00:20:30.193 "zone_management": false, 00:20:30.193 "zone_append": false, 00:20:30.193 "compare": false, 00:20:30.193 "compare_and_write": false, 00:20:30.193 "abort": true, 00:20:30.193 "seek_hole": false, 00:20:30.193 "seek_data": false, 00:20:30.193 
"copy": true, 00:20:30.193 "nvme_iov_md": false 00:20:30.193 }, 00:20:30.193 "memory_domains": [ 00:20:30.193 { 00:20:30.193 "dma_device_id": "system", 00:20:30.193 "dma_device_type": 1 00:20:30.193 }, 00:20:30.193 { 00:20:30.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.193 "dma_device_type": 2 00:20:30.193 } 00:20:30.193 ], 00:20:30.193 "driver_specific": {} 00:20:30.193 } 00:20:30.193 ] 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.193 17:08:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.453 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.453 "name": "Existed_Raid", 00:20:30.453 "uuid": "d0bee10f-a46c-48f6-984a-811e02d0b7a7", 00:20:30.453 "strip_size_kb": 0, 00:20:30.453 "state": "online", 00:20:30.453 "raid_level": "raid1", 00:20:30.453 "superblock": false, 00:20:30.453 "num_base_bdevs": 3, 00:20:30.453 "num_base_bdevs_discovered": 3, 00:20:30.453 "num_base_bdevs_operational": 3, 00:20:30.453 "base_bdevs_list": [ 00:20:30.453 { 00:20:30.453 "name": "BaseBdev1", 00:20:30.453 "uuid": "7bfa163b-28db-4e23-a184-beba1fc57472", 00:20:30.453 "is_configured": true, 00:20:30.453 "data_offset": 0, 00:20:30.453 "data_size": 65536 00:20:30.453 }, 00:20:30.453 { 00:20:30.453 "name": "BaseBdev2", 00:20:30.453 "uuid": "168691a7-6d2e-40bd-9392-86795859051e", 00:20:30.453 "is_configured": true, 00:20:30.453 "data_offset": 0, 00:20:30.453 "data_size": 65536 00:20:30.453 }, 00:20:30.453 { 00:20:30.453 "name": "BaseBdev3", 00:20:30.453 "uuid": "f8024a60-6ab1-4ebd-b0c1-23bb9ba4593d", 00:20:30.453 "is_configured": true, 00:20:30.453 "data_offset": 0, 00:20:30.453 "data_size": 65536 00:20:30.453 } 00:20:30.453 ] 00:20:30.453 }' 00:20:30.453 17:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.453 17:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.711 17:08:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.711 [2024-11-08 17:08:07.237816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:30.711 "name": "Existed_Raid", 00:20:30.711 "aliases": [ 00:20:30.711 "d0bee10f-a46c-48f6-984a-811e02d0b7a7" 00:20:30.711 ], 00:20:30.711 "product_name": "Raid Volume", 00:20:30.711 "block_size": 512, 00:20:30.711 "num_blocks": 65536, 00:20:30.711 "uuid": "d0bee10f-a46c-48f6-984a-811e02d0b7a7", 00:20:30.711 "assigned_rate_limits": { 00:20:30.711 "rw_ios_per_sec": 0, 00:20:30.711 "rw_mbytes_per_sec": 0, 00:20:30.711 "r_mbytes_per_sec": 0, 00:20:30.711 "w_mbytes_per_sec": 0 00:20:30.711 }, 00:20:30.711 "claimed": false, 00:20:30.711 "zoned": false, 
00:20:30.711 "supported_io_types": { 00:20:30.711 "read": true, 00:20:30.711 "write": true, 00:20:30.711 "unmap": false, 00:20:30.711 "flush": false, 00:20:30.711 "reset": true, 00:20:30.711 "nvme_admin": false, 00:20:30.711 "nvme_io": false, 00:20:30.711 "nvme_io_md": false, 00:20:30.711 "write_zeroes": true, 00:20:30.711 "zcopy": false, 00:20:30.711 "get_zone_info": false, 00:20:30.711 "zone_management": false, 00:20:30.711 "zone_append": false, 00:20:30.711 "compare": false, 00:20:30.711 "compare_and_write": false, 00:20:30.711 "abort": false, 00:20:30.711 "seek_hole": false, 00:20:30.711 "seek_data": false, 00:20:30.711 "copy": false, 00:20:30.711 "nvme_iov_md": false 00:20:30.711 }, 00:20:30.711 "memory_domains": [ 00:20:30.711 { 00:20:30.711 "dma_device_id": "system", 00:20:30.711 "dma_device_type": 1 00:20:30.711 }, 00:20:30.711 { 00:20:30.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.711 "dma_device_type": 2 00:20:30.711 }, 00:20:30.711 { 00:20:30.711 "dma_device_id": "system", 00:20:30.711 "dma_device_type": 1 00:20:30.711 }, 00:20:30.711 { 00:20:30.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.711 "dma_device_type": 2 00:20:30.711 }, 00:20:30.711 { 00:20:30.711 "dma_device_id": "system", 00:20:30.711 "dma_device_type": 1 00:20:30.711 }, 00:20:30.711 { 00:20:30.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.711 "dma_device_type": 2 00:20:30.711 } 00:20:30.711 ], 00:20:30.711 "driver_specific": { 00:20:30.711 "raid": { 00:20:30.711 "uuid": "d0bee10f-a46c-48f6-984a-811e02d0b7a7", 00:20:30.711 "strip_size_kb": 0, 00:20:30.711 "state": "online", 00:20:30.711 "raid_level": "raid1", 00:20:30.711 "superblock": false, 00:20:30.711 "num_base_bdevs": 3, 00:20:30.711 "num_base_bdevs_discovered": 3, 00:20:30.711 "num_base_bdevs_operational": 3, 00:20:30.711 "base_bdevs_list": [ 00:20:30.711 { 00:20:30.711 "name": "BaseBdev1", 00:20:30.711 "uuid": "7bfa163b-28db-4e23-a184-beba1fc57472", 00:20:30.711 "is_configured": true, 00:20:30.711 
"data_offset": 0, 00:20:30.711 "data_size": 65536 00:20:30.711 }, 00:20:30.711 { 00:20:30.711 "name": "BaseBdev2", 00:20:30.711 "uuid": "168691a7-6d2e-40bd-9392-86795859051e", 00:20:30.711 "is_configured": true, 00:20:30.711 "data_offset": 0, 00:20:30.711 "data_size": 65536 00:20:30.711 }, 00:20:30.711 { 00:20:30.711 "name": "BaseBdev3", 00:20:30.711 "uuid": "f8024a60-6ab1-4ebd-b0c1-23bb9ba4593d", 00:20:30.711 "is_configured": true, 00:20:30.711 "data_offset": 0, 00:20:30.711 "data_size": 65536 00:20:30.711 } 00:20:30.711 ] 00:20:30.711 } 00:20:30.711 } 00:20:30.711 }' 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:30.711 BaseBdev2 00:20:30.711 BaseBdev3' 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.711 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:30.712 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:30.712 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:30.712 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:30.712 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.712 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.712 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:30.712 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.969 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:30.969 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:20:30.969 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:30.969 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.969 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.970 [2024-11-08 17:08:07.437543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.970 "name": "Existed_Raid", 00:20:30.970 "uuid": "d0bee10f-a46c-48f6-984a-811e02d0b7a7", 00:20:30.970 "strip_size_kb": 0, 00:20:30.970 "state": "online", 00:20:30.970 "raid_level": "raid1", 00:20:30.970 "superblock": false, 00:20:30.970 "num_base_bdevs": 3, 00:20:30.970 "num_base_bdevs_discovered": 2, 00:20:30.970 "num_base_bdevs_operational": 2, 00:20:30.970 "base_bdevs_list": [ 00:20:30.970 { 00:20:30.970 "name": null, 00:20:30.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.970 "is_configured": false, 00:20:30.970 "data_offset": 0, 00:20:30.970 "data_size": 65536 00:20:30.970 }, 00:20:30.970 { 00:20:30.970 "name": "BaseBdev2", 00:20:30.970 "uuid": "168691a7-6d2e-40bd-9392-86795859051e", 00:20:30.970 "is_configured": true, 00:20:30.970 "data_offset": 0, 00:20:30.970 "data_size": 65536 00:20:30.970 }, 00:20:30.970 { 00:20:30.970 "name": "BaseBdev3", 00:20:30.970 "uuid": "f8024a60-6ab1-4ebd-b0c1-23bb9ba4593d", 00:20:30.970 "is_configured": true, 00:20:30.970 "data_offset": 0, 00:20:30.970 "data_size": 65536 00:20:30.970 } 00:20:30.970 ] 
00:20:30.970 }' 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.970 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.229 [2024-11-08 17:08:07.875263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:31.229 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:31.489 17:08:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.489 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:31.489 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.489 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.490 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.490 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:31.490 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:31.490 17:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:31.490 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.490 17:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.490 [2024-11-08 17:08:07.981937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:31.490 [2024-11-08 17:08:07.982133] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:31.490 [2024-11-08 17:08:08.045299] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.490 [2024-11-08 17:08:08.045526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:31.490 [2024-11-08 17:08:08.045598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:31.490 17:08:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.490 BaseBdev2 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:31.490 
17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.490 [ 00:20:31.490 { 00:20:31.490 "name": "BaseBdev2", 00:20:31.490 "aliases": [ 00:20:31.490 "210bb9d3-3210-4943-a31f-92b17a8711b9" 00:20:31.490 ], 00:20:31.490 "product_name": "Malloc disk", 00:20:31.490 "block_size": 512, 00:20:31.490 "num_blocks": 65536, 00:20:31.490 "uuid": "210bb9d3-3210-4943-a31f-92b17a8711b9", 00:20:31.490 "assigned_rate_limits": { 00:20:31.490 "rw_ios_per_sec": 0, 00:20:31.490 "rw_mbytes_per_sec": 0, 00:20:31.490 "r_mbytes_per_sec": 0, 00:20:31.490 "w_mbytes_per_sec": 0 00:20:31.490 }, 00:20:31.490 "claimed": false, 00:20:31.490 "zoned": false, 00:20:31.490 "supported_io_types": { 00:20:31.490 "read": true, 00:20:31.490 "write": true, 00:20:31.490 "unmap": true, 00:20:31.490 "flush": true, 00:20:31.490 "reset": true, 00:20:31.490 "nvme_admin": false, 00:20:31.490 "nvme_io": false, 00:20:31.490 "nvme_io_md": false, 00:20:31.490 "write_zeroes": true, 
00:20:31.490 "zcopy": true, 00:20:31.490 "get_zone_info": false, 00:20:31.490 "zone_management": false, 00:20:31.490 "zone_append": false, 00:20:31.490 "compare": false, 00:20:31.490 "compare_and_write": false, 00:20:31.490 "abort": true, 00:20:31.490 "seek_hole": false, 00:20:31.490 "seek_data": false, 00:20:31.490 "copy": true, 00:20:31.490 "nvme_iov_md": false 00:20:31.490 }, 00:20:31.490 "memory_domains": [ 00:20:31.490 { 00:20:31.490 "dma_device_id": "system", 00:20:31.490 "dma_device_type": 1 00:20:31.490 }, 00:20:31.490 { 00:20:31.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.490 "dma_device_type": 2 00:20:31.490 } 00:20:31.490 ], 00:20:31.490 "driver_specific": {} 00:20:31.490 } 00:20:31.490 ] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.490 BaseBdev3 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:31.490 17:08:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.490 [ 00:20:31.490 { 00:20:31.490 "name": "BaseBdev3", 00:20:31.490 "aliases": [ 00:20:31.490 "c19c833a-26d1-4f00-8826-9d6e64ffdc78" 00:20:31.490 ], 00:20:31.490 "product_name": "Malloc disk", 00:20:31.490 "block_size": 512, 00:20:31.490 "num_blocks": 65536, 00:20:31.490 "uuid": "c19c833a-26d1-4f00-8826-9d6e64ffdc78", 00:20:31.490 "assigned_rate_limits": { 00:20:31.490 "rw_ios_per_sec": 0, 00:20:31.490 "rw_mbytes_per_sec": 0, 00:20:31.490 "r_mbytes_per_sec": 0, 00:20:31.490 "w_mbytes_per_sec": 0 00:20:31.490 }, 00:20:31.490 "claimed": false, 00:20:31.490 "zoned": false, 00:20:31.490 "supported_io_types": { 00:20:31.490 "read": true, 00:20:31.490 "write": true, 00:20:31.490 "unmap": true, 00:20:31.490 "flush": true, 00:20:31.490 "reset": true, 00:20:31.490 "nvme_admin": false, 00:20:31.490 "nvme_io": false, 00:20:31.490 "nvme_io_md": false, 00:20:31.490 "write_zeroes": true, 
00:20:31.490 "zcopy": true, 00:20:31.490 "get_zone_info": false, 00:20:31.490 "zone_management": false, 00:20:31.490 "zone_append": false, 00:20:31.490 "compare": false, 00:20:31.490 "compare_and_write": false, 00:20:31.490 "abort": true, 00:20:31.490 "seek_hole": false, 00:20:31.490 "seek_data": false, 00:20:31.490 "copy": true, 00:20:31.490 "nvme_iov_md": false 00:20:31.490 }, 00:20:31.490 "memory_domains": [ 00:20:31.490 { 00:20:31.490 "dma_device_id": "system", 00:20:31.490 "dma_device_type": 1 00:20:31.490 }, 00:20:31.490 { 00:20:31.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.490 "dma_device_type": 2 00:20:31.490 } 00:20:31.490 ], 00:20:31.490 "driver_specific": {} 00:20:31.490 } 00:20:31.490 ] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.490 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.490 [2024-11-08 17:08:08.194123] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:31.491 [2024-11-08 17:08:08.194275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:31.491 [2024-11-08 17:08:08.194348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:31.491 [2024-11-08 17:08:08.196361] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.491 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.748 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.748 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.748 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.748 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.748 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.748 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:20:31.748 "name": "Existed_Raid", 00:20:31.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.748 "strip_size_kb": 0, 00:20:31.748 "state": "configuring", 00:20:31.748 "raid_level": "raid1", 00:20:31.748 "superblock": false, 00:20:31.748 "num_base_bdevs": 3, 00:20:31.748 "num_base_bdevs_discovered": 2, 00:20:31.748 "num_base_bdevs_operational": 3, 00:20:31.748 "base_bdevs_list": [ 00:20:31.748 { 00:20:31.748 "name": "BaseBdev1", 00:20:31.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.748 "is_configured": false, 00:20:31.748 "data_offset": 0, 00:20:31.748 "data_size": 0 00:20:31.748 }, 00:20:31.748 { 00:20:31.749 "name": "BaseBdev2", 00:20:31.749 "uuid": "210bb9d3-3210-4943-a31f-92b17a8711b9", 00:20:31.749 "is_configured": true, 00:20:31.749 "data_offset": 0, 00:20:31.749 "data_size": 65536 00:20:31.749 }, 00:20:31.749 { 00:20:31.749 "name": "BaseBdev3", 00:20:31.749 "uuid": "c19c833a-26d1-4f00-8826-9d6e64ffdc78", 00:20:31.749 "is_configured": true, 00:20:31.749 "data_offset": 0, 00:20:31.749 "data_size": 65536 00:20:31.749 } 00:20:31.749 ] 00:20:31.749 }' 00:20:31.749 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.749 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.005 [2024-11-08 17:08:08.518227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.005 "name": "Existed_Raid", 00:20:32.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.005 "strip_size_kb": 0, 00:20:32.005 "state": "configuring", 00:20:32.005 "raid_level": "raid1", 00:20:32.005 "superblock": false, 00:20:32.005 "num_base_bdevs": 3, 
00:20:32.005 "num_base_bdevs_discovered": 1, 00:20:32.005 "num_base_bdevs_operational": 3, 00:20:32.005 "base_bdevs_list": [ 00:20:32.005 { 00:20:32.005 "name": "BaseBdev1", 00:20:32.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.005 "is_configured": false, 00:20:32.005 "data_offset": 0, 00:20:32.005 "data_size": 0 00:20:32.005 }, 00:20:32.005 { 00:20:32.005 "name": null, 00:20:32.005 "uuid": "210bb9d3-3210-4943-a31f-92b17a8711b9", 00:20:32.005 "is_configured": false, 00:20:32.005 "data_offset": 0, 00:20:32.005 "data_size": 65536 00:20:32.005 }, 00:20:32.005 { 00:20:32.005 "name": "BaseBdev3", 00:20:32.005 "uuid": "c19c833a-26d1-4f00-8826-9d6e64ffdc78", 00:20:32.005 "is_configured": true, 00:20:32.005 "data_offset": 0, 00:20:32.005 "data_size": 65536 00:20:32.005 } 00:20:32.005 ] 00:20:32.005 }' 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.005 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.262 17:08:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.262 [2024-11-08 17:08:08.947706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:32.262 BaseBdev1 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.262 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.262 [ 00:20:32.262 { 00:20:32.262 "name": "BaseBdev1", 00:20:32.262 "aliases": [ 00:20:32.262 "b6b7818d-410e-4f52-bfaf-7e15bf1dfb41" 00:20:32.262 ], 00:20:32.262 "product_name": "Malloc disk", 
00:20:32.262 "block_size": 512, 00:20:32.262 "num_blocks": 65536, 00:20:32.262 "uuid": "b6b7818d-410e-4f52-bfaf-7e15bf1dfb41", 00:20:32.262 "assigned_rate_limits": { 00:20:32.262 "rw_ios_per_sec": 0, 00:20:32.262 "rw_mbytes_per_sec": 0, 00:20:32.262 "r_mbytes_per_sec": 0, 00:20:32.262 "w_mbytes_per_sec": 0 00:20:32.262 }, 00:20:32.263 "claimed": true, 00:20:32.263 "claim_type": "exclusive_write", 00:20:32.263 "zoned": false, 00:20:32.263 "supported_io_types": { 00:20:32.263 "read": true, 00:20:32.263 "write": true, 00:20:32.263 "unmap": true, 00:20:32.263 "flush": true, 00:20:32.263 "reset": true, 00:20:32.263 "nvme_admin": false, 00:20:32.263 "nvme_io": false, 00:20:32.263 "nvme_io_md": false, 00:20:32.263 "write_zeroes": true, 00:20:32.263 "zcopy": true, 00:20:32.263 "get_zone_info": false, 00:20:32.263 "zone_management": false, 00:20:32.263 "zone_append": false, 00:20:32.263 "compare": false, 00:20:32.263 "compare_and_write": false, 00:20:32.263 "abort": true, 00:20:32.263 "seek_hole": false, 00:20:32.263 "seek_data": false, 00:20:32.263 "copy": true, 00:20:32.263 "nvme_iov_md": false 00:20:32.263 }, 00:20:32.263 "memory_domains": [ 00:20:32.263 { 00:20:32.263 "dma_device_id": "system", 00:20:32.263 "dma_device_type": 1 00:20:32.263 }, 00:20:32.263 { 00:20:32.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.263 "dma_device_type": 2 00:20:32.263 } 00:20:32.263 ], 00:20:32.263 "driver_specific": {} 00:20:32.263 } 00:20:32.263 ] 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.263 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.521 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.521 17:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.521 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.521 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.521 17:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.521 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.521 "name": "Existed_Raid", 00:20:32.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.521 "strip_size_kb": 0, 00:20:32.521 "state": "configuring", 00:20:32.521 "raid_level": "raid1", 00:20:32.521 "superblock": false, 00:20:32.521 "num_base_bdevs": 3, 00:20:32.521 "num_base_bdevs_discovered": 2, 00:20:32.521 "num_base_bdevs_operational": 3, 00:20:32.521 "base_bdevs_list": [ 00:20:32.521 { 00:20:32.521 "name": "BaseBdev1", 00:20:32.521 "uuid": 
"b6b7818d-410e-4f52-bfaf-7e15bf1dfb41", 00:20:32.521 "is_configured": true, 00:20:32.521 "data_offset": 0, 00:20:32.521 "data_size": 65536 00:20:32.521 }, 00:20:32.521 { 00:20:32.521 "name": null, 00:20:32.521 "uuid": "210bb9d3-3210-4943-a31f-92b17a8711b9", 00:20:32.521 "is_configured": false, 00:20:32.521 "data_offset": 0, 00:20:32.521 "data_size": 65536 00:20:32.521 }, 00:20:32.521 { 00:20:32.521 "name": "BaseBdev3", 00:20:32.521 "uuid": "c19c833a-26d1-4f00-8826-9d6e64ffdc78", 00:20:32.521 "is_configured": true, 00:20:32.521 "data_offset": 0, 00:20:32.521 "data_size": 65536 00:20:32.521 } 00:20:32.521 ] 00:20:32.521 }' 00:20:32.521 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.521 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.779 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.779 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:32.779 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.779 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.779 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.779 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:32.779 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:32.779 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.780 [2024-11-08 17:08:09.347856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:32.780 17:08:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.780 "name": "Existed_Raid", 00:20:32.780 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:32.780 "strip_size_kb": 0, 00:20:32.780 "state": "configuring", 00:20:32.780 "raid_level": "raid1", 00:20:32.780 "superblock": false, 00:20:32.780 "num_base_bdevs": 3, 00:20:32.780 "num_base_bdevs_discovered": 1, 00:20:32.780 "num_base_bdevs_operational": 3, 00:20:32.780 "base_bdevs_list": [ 00:20:32.780 { 00:20:32.780 "name": "BaseBdev1", 00:20:32.780 "uuid": "b6b7818d-410e-4f52-bfaf-7e15bf1dfb41", 00:20:32.780 "is_configured": true, 00:20:32.780 "data_offset": 0, 00:20:32.780 "data_size": 65536 00:20:32.780 }, 00:20:32.780 { 00:20:32.780 "name": null, 00:20:32.780 "uuid": "210bb9d3-3210-4943-a31f-92b17a8711b9", 00:20:32.780 "is_configured": false, 00:20:32.780 "data_offset": 0, 00:20:32.780 "data_size": 65536 00:20:32.780 }, 00:20:32.780 { 00:20:32.780 "name": null, 00:20:32.780 "uuid": "c19c833a-26d1-4f00-8826-9d6e64ffdc78", 00:20:32.780 "is_configured": false, 00:20:32.780 "data_offset": 0, 00:20:32.780 "data_size": 65536 00:20:32.780 } 00:20:32.780 ] 00:20:32.780 }' 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.780 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.038 [2024-11-08 17:08:09.720014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.038 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.296 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.296 "name": "Existed_Raid", 00:20:33.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.296 "strip_size_kb": 0, 00:20:33.296 "state": "configuring", 00:20:33.296 "raid_level": "raid1", 00:20:33.296 "superblock": false, 00:20:33.296 "num_base_bdevs": 3, 00:20:33.296 "num_base_bdevs_discovered": 2, 00:20:33.296 "num_base_bdevs_operational": 3, 00:20:33.296 "base_bdevs_list": [ 00:20:33.296 { 00:20:33.296 "name": "BaseBdev1", 00:20:33.296 "uuid": "b6b7818d-410e-4f52-bfaf-7e15bf1dfb41", 00:20:33.296 "is_configured": true, 00:20:33.296 "data_offset": 0, 00:20:33.296 "data_size": 65536 00:20:33.296 }, 00:20:33.296 { 00:20:33.296 "name": null, 00:20:33.296 "uuid": "210bb9d3-3210-4943-a31f-92b17a8711b9", 00:20:33.296 "is_configured": false, 00:20:33.296 "data_offset": 0, 00:20:33.296 "data_size": 65536 00:20:33.296 }, 00:20:33.296 { 00:20:33.296 "name": "BaseBdev3", 00:20:33.296 "uuid": "c19c833a-26d1-4f00-8826-9d6e64ffdc78", 00:20:33.296 "is_configured": true, 00:20:33.296 "data_offset": 0, 00:20:33.296 "data_size": 65536 00:20:33.296 } 00:20:33.296 ] 00:20:33.296 }' 00:20:33.296 17:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.296 17:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.554 [2024-11-08 17:08:10.084121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.554 17:08:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.554 "name": "Existed_Raid", 00:20:33.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.554 "strip_size_kb": 0, 00:20:33.554 "state": "configuring", 00:20:33.554 "raid_level": "raid1", 00:20:33.554 "superblock": false, 00:20:33.554 "num_base_bdevs": 3, 00:20:33.554 "num_base_bdevs_discovered": 1, 00:20:33.554 "num_base_bdevs_operational": 3, 00:20:33.554 "base_bdevs_list": [ 00:20:33.554 { 00:20:33.554 "name": null, 00:20:33.554 "uuid": "b6b7818d-410e-4f52-bfaf-7e15bf1dfb41", 00:20:33.554 "is_configured": false, 00:20:33.554 "data_offset": 0, 00:20:33.554 "data_size": 65536 00:20:33.554 }, 00:20:33.554 { 00:20:33.554 "name": null, 00:20:33.554 "uuid": "210bb9d3-3210-4943-a31f-92b17a8711b9", 00:20:33.554 "is_configured": false, 00:20:33.554 "data_offset": 0, 00:20:33.554 "data_size": 65536 00:20:33.554 }, 00:20:33.554 { 00:20:33.554 "name": "BaseBdev3", 00:20:33.554 "uuid": "c19c833a-26d1-4f00-8826-9d6e64ffdc78", 00:20:33.554 "is_configured": true, 00:20:33.554 "data_offset": 0, 00:20:33.554 "data_size": 65536 00:20:33.554 } 00:20:33.554 ] 00:20:33.554 }' 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.554 17:08:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.812 [2024-11-08 17:08:10.511478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.812 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.070 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.070 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.070 "name": "Existed_Raid", 00:20:34.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.070 "strip_size_kb": 0, 00:20:34.070 "state": "configuring", 00:20:34.070 "raid_level": "raid1", 00:20:34.070 "superblock": false, 00:20:34.070 "num_base_bdevs": 3, 00:20:34.070 "num_base_bdevs_discovered": 2, 00:20:34.070 "num_base_bdevs_operational": 3, 00:20:34.070 "base_bdevs_list": [ 00:20:34.070 { 00:20:34.070 "name": null, 00:20:34.070 "uuid": "b6b7818d-410e-4f52-bfaf-7e15bf1dfb41", 00:20:34.070 "is_configured": false, 00:20:34.070 "data_offset": 0, 00:20:34.070 "data_size": 65536 00:20:34.070 }, 00:20:34.070 { 00:20:34.070 "name": "BaseBdev2", 00:20:34.070 "uuid": "210bb9d3-3210-4943-a31f-92b17a8711b9", 00:20:34.070 "is_configured": true, 00:20:34.070 "data_offset": 0, 00:20:34.070 "data_size": 65536 00:20:34.070 }, 00:20:34.070 { 
00:20:34.070 "name": "BaseBdev3", 00:20:34.070 "uuid": "c19c833a-26d1-4f00-8826-9d6e64ffdc78", 00:20:34.070 "is_configured": true, 00:20:34.070 "data_offset": 0, 00:20:34.070 "data_size": 65536 00:20:34.070 } 00:20:34.070 ] 00:20:34.070 }' 00:20:34.070 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.070 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b6b7818d-410e-4f52-bfaf-7e15bf1dfb41 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.333 17:08:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.333 [2024-11-08 17:08:10.924245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:34.333 [2024-11-08 17:08:10.924459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:34.333 [2024-11-08 17:08:10.924474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:34.333 [2024-11-08 17:08:10.924741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:34.333 [2024-11-08 17:08:10.924923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:34.333 [2024-11-08 17:08:10.924935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:34.333 [2024-11-08 17:08:10.925184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.333 NewBaseBdev 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.333 [ 00:20:34.333 { 00:20:34.333 "name": "NewBaseBdev", 00:20:34.333 "aliases": [ 00:20:34.333 "b6b7818d-410e-4f52-bfaf-7e15bf1dfb41" 00:20:34.333 ], 00:20:34.333 "product_name": "Malloc disk", 00:20:34.333 "block_size": 512, 00:20:34.333 "num_blocks": 65536, 00:20:34.333 "uuid": "b6b7818d-410e-4f52-bfaf-7e15bf1dfb41", 00:20:34.333 "assigned_rate_limits": { 00:20:34.333 "rw_ios_per_sec": 0, 00:20:34.333 "rw_mbytes_per_sec": 0, 00:20:34.333 "r_mbytes_per_sec": 0, 00:20:34.333 "w_mbytes_per_sec": 0 00:20:34.333 }, 00:20:34.333 "claimed": true, 00:20:34.333 "claim_type": "exclusive_write", 00:20:34.333 "zoned": false, 00:20:34.333 "supported_io_types": { 00:20:34.333 "read": true, 00:20:34.333 "write": true, 00:20:34.333 "unmap": true, 00:20:34.333 "flush": true, 00:20:34.333 "reset": true, 00:20:34.333 "nvme_admin": false, 00:20:34.333 "nvme_io": false, 00:20:34.333 "nvme_io_md": false, 00:20:34.333 "write_zeroes": true, 00:20:34.333 "zcopy": true, 00:20:34.333 "get_zone_info": false, 00:20:34.333 "zone_management": false, 00:20:34.333 "zone_append": false, 00:20:34.333 "compare": false, 00:20:34.333 "compare_and_write": false, 00:20:34.333 "abort": true, 00:20:34.333 "seek_hole": false, 00:20:34.333 "seek_data": false, 00:20:34.333 "copy": true, 00:20:34.333 "nvme_iov_md": false 00:20:34.333 }, 00:20:34.333 "memory_domains": [ 00:20:34.333 { 00:20:34.333 
"dma_device_id": "system", 00:20:34.333 "dma_device_type": 1 00:20:34.333 }, 00:20:34.333 { 00:20:34.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.333 "dma_device_type": 2 00:20:34.333 } 00:20:34.333 ], 00:20:34.333 "driver_specific": {} 00:20:34.333 } 00:20:34.333 ] 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:34.333 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.334 "name": "Existed_Raid", 00:20:34.334 "uuid": "6659801c-8757-4469-8071-cc9014cfeceb", 00:20:34.334 "strip_size_kb": 0, 00:20:34.334 "state": "online", 00:20:34.334 "raid_level": "raid1", 00:20:34.334 "superblock": false, 00:20:34.334 "num_base_bdevs": 3, 00:20:34.334 "num_base_bdevs_discovered": 3, 00:20:34.334 "num_base_bdevs_operational": 3, 00:20:34.334 "base_bdevs_list": [ 00:20:34.334 { 00:20:34.334 "name": "NewBaseBdev", 00:20:34.334 "uuid": "b6b7818d-410e-4f52-bfaf-7e15bf1dfb41", 00:20:34.334 "is_configured": true, 00:20:34.334 "data_offset": 0, 00:20:34.334 "data_size": 65536 00:20:34.334 }, 00:20:34.334 { 00:20:34.334 "name": "BaseBdev2", 00:20:34.334 "uuid": "210bb9d3-3210-4943-a31f-92b17a8711b9", 00:20:34.334 "is_configured": true, 00:20:34.334 "data_offset": 0, 00:20:34.334 "data_size": 65536 00:20:34.334 }, 00:20:34.334 { 00:20:34.334 "name": "BaseBdev3", 00:20:34.334 "uuid": "c19c833a-26d1-4f00-8826-9d6e64ffdc78", 00:20:34.334 "is_configured": true, 00:20:34.334 "data_offset": 0, 00:20:34.334 "data_size": 65536 00:20:34.334 } 00:20:34.334 ] 00:20:34.334 }' 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.334 17:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:34.604 17:08:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.604 [2024-11-08 17:08:11.260710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.604 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:34.604 "name": "Existed_Raid", 00:20:34.604 "aliases": [ 00:20:34.604 "6659801c-8757-4469-8071-cc9014cfeceb" 00:20:34.604 ], 00:20:34.604 "product_name": "Raid Volume", 00:20:34.604 "block_size": 512, 00:20:34.604 "num_blocks": 65536, 00:20:34.604 "uuid": "6659801c-8757-4469-8071-cc9014cfeceb", 00:20:34.604 "assigned_rate_limits": { 00:20:34.604 "rw_ios_per_sec": 0, 00:20:34.604 "rw_mbytes_per_sec": 0, 00:20:34.604 "r_mbytes_per_sec": 0, 00:20:34.604 "w_mbytes_per_sec": 0 00:20:34.604 }, 00:20:34.604 "claimed": false, 00:20:34.604 "zoned": false, 00:20:34.604 "supported_io_types": { 00:20:34.604 "read": true, 00:20:34.604 "write": true, 00:20:34.604 "unmap": false, 00:20:34.604 "flush": false, 00:20:34.604 "reset": true, 00:20:34.604 "nvme_admin": false, 00:20:34.604 "nvme_io": false, 00:20:34.604 "nvme_io_md": false, 00:20:34.604 "write_zeroes": true, 00:20:34.604 "zcopy": false, 00:20:34.604 
"get_zone_info": false, 00:20:34.604 "zone_management": false, 00:20:34.604 "zone_append": false, 00:20:34.604 "compare": false, 00:20:34.604 "compare_and_write": false, 00:20:34.604 "abort": false, 00:20:34.604 "seek_hole": false, 00:20:34.604 "seek_data": false, 00:20:34.604 "copy": false, 00:20:34.604 "nvme_iov_md": false 00:20:34.604 }, 00:20:34.604 "memory_domains": [ 00:20:34.604 { 00:20:34.604 "dma_device_id": "system", 00:20:34.604 "dma_device_type": 1 00:20:34.604 }, 00:20:34.604 { 00:20:34.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.604 "dma_device_type": 2 00:20:34.604 }, 00:20:34.604 { 00:20:34.604 "dma_device_id": "system", 00:20:34.604 "dma_device_type": 1 00:20:34.604 }, 00:20:34.604 { 00:20:34.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.604 "dma_device_type": 2 00:20:34.604 }, 00:20:34.604 { 00:20:34.604 "dma_device_id": "system", 00:20:34.604 "dma_device_type": 1 00:20:34.604 }, 00:20:34.604 { 00:20:34.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.604 "dma_device_type": 2 00:20:34.604 } 00:20:34.604 ], 00:20:34.604 "driver_specific": { 00:20:34.604 "raid": { 00:20:34.604 "uuid": "6659801c-8757-4469-8071-cc9014cfeceb", 00:20:34.604 "strip_size_kb": 0, 00:20:34.604 "state": "online", 00:20:34.604 "raid_level": "raid1", 00:20:34.604 "superblock": false, 00:20:34.604 "num_base_bdevs": 3, 00:20:34.604 "num_base_bdevs_discovered": 3, 00:20:34.604 "num_base_bdevs_operational": 3, 00:20:34.604 "base_bdevs_list": [ 00:20:34.604 { 00:20:34.604 "name": "NewBaseBdev", 00:20:34.604 "uuid": "b6b7818d-410e-4f52-bfaf-7e15bf1dfb41", 00:20:34.604 "is_configured": true, 00:20:34.604 "data_offset": 0, 00:20:34.604 "data_size": 65536 00:20:34.604 }, 00:20:34.604 { 00:20:34.604 "name": "BaseBdev2", 00:20:34.604 "uuid": "210bb9d3-3210-4943-a31f-92b17a8711b9", 00:20:34.604 "is_configured": true, 00:20:34.604 "data_offset": 0, 00:20:34.605 "data_size": 65536 00:20:34.605 }, 00:20:34.605 { 00:20:34.605 "name": "BaseBdev3", 00:20:34.605 "uuid": 
"c19c833a-26d1-4f00-8826-9d6e64ffdc78", 00:20:34.605 "is_configured": true, 00:20:34.605 "data_offset": 0, 00:20:34.605 "data_size": 65536 00:20:34.605 } 00:20:34.605 ] 00:20:34.605 } 00:20:34.605 } 00:20:34.605 }' 00:20:34.605 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:34.863 BaseBdev2 00:20:34.863 BaseBdev3' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:34.863 [2024-11-08 17:08:11.440413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:34.863 [2024-11-08 17:08:11.440531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.863 [2024-11-08 17:08:11.440659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.863 [2024-11-08 17:08:11.440985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.863 [2024-11-08 17:08:11.441026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66139 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 66139 ']' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 66139 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66139 00:20:34.863 killing process with pid 66139 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66139' 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 66139 00:20:34.863 
[2024-11-08 17:08:11.477455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:34.863 17:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 66139 00:20:35.122 [2024-11-08 17:08:11.676337] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:36.055 00:20:36.055 real 0m8.022s 00:20:36.055 user 0m12.685s 00:20:36.055 sys 0m1.342s 00:20:36.055 ************************************ 00:20:36.055 END TEST raid_state_function_test 00:20:36.055 ************************************ 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.055 17:08:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:20:36.055 17:08:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:36.055 17:08:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:36.055 17:08:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.055 ************************************ 00:20:36.055 START TEST raid_state_function_test_sb 00:20:36.055 ************************************ 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:36.055 17:08:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:36.055 
17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:36.055 Process raid pid: 66738 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66738 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66738' 00:20:36.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66738 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66738 ']' 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.055 17:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:36.055 [2024-11-08 17:08:12.574470] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:20:36.055 [2024-11-08 17:08:12.574614] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.055 [2024-11-08 17:08:12.733793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.313 [2024-11-08 17:08:12.851206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.313 [2024-11-08 17:08:13.001694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.313 [2024-11-08 17:08:13.001859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.880 [2024-11-08 17:08:13.428934] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:36.880 [2024-11-08 17:08:13.429088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:36.880 [2024-11-08 17:08:13.429152] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.880 [2024-11-08 17:08:13.429180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.880 [2024-11-08 17:08:13.429198] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:20:36.880 [2024-11-08 17:08:13.429219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.880 "name": "Existed_Raid", 00:20:36.880 "uuid": "a7f53fb8-ca45-4efd-bcff-6a8ceabf8462", 00:20:36.880 "strip_size_kb": 0, 00:20:36.880 "state": "configuring", 00:20:36.880 "raid_level": "raid1", 00:20:36.880 "superblock": true, 00:20:36.880 "num_base_bdevs": 3, 00:20:36.880 "num_base_bdevs_discovered": 0, 00:20:36.880 "num_base_bdevs_operational": 3, 00:20:36.880 "base_bdevs_list": [ 00:20:36.880 { 00:20:36.880 "name": "BaseBdev1", 00:20:36.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.880 "is_configured": false, 00:20:36.880 "data_offset": 0, 00:20:36.880 "data_size": 0 00:20:36.880 }, 00:20:36.880 { 00:20:36.880 "name": "BaseBdev2", 00:20:36.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.880 "is_configured": false, 00:20:36.880 "data_offset": 0, 00:20:36.880 "data_size": 0 00:20:36.880 }, 00:20:36.880 { 00:20:36.880 "name": "BaseBdev3", 00:20:36.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.880 "is_configured": false, 00:20:36.880 "data_offset": 0, 00:20:36.880 "data_size": 0 00:20:36.880 } 00:20:36.880 ] 00:20:36.880 }' 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.880 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.138 [2024-11-08 17:08:13.804982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:37.138 [2024-11-08 17:08:13.805025] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.138 [2024-11-08 17:08:13.816990] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:37.138 [2024-11-08 17:08:13.817130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:37.138 [2024-11-08 17:08:13.817185] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:37.138 [2024-11-08 17:08:13.817212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:37.138 [2024-11-08 17:08:13.817254] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:37.138 [2024-11-08 17:08:13.817279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.138 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.396 [2024-11-08 17:08:13.851852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.396 BaseBdev1 
00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.396 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.397 [ 00:20:37.397 { 00:20:37.397 "name": "BaseBdev1", 00:20:37.397 "aliases": [ 00:20:37.397 "1a5adb64-5cd4-4dce-a1fd-24a7ed205999" 00:20:37.397 ], 00:20:37.397 "product_name": "Malloc disk", 00:20:37.397 "block_size": 512, 00:20:37.397 "num_blocks": 65536, 00:20:37.397 "uuid": "1a5adb64-5cd4-4dce-a1fd-24a7ed205999", 00:20:37.397 "assigned_rate_limits": { 00:20:37.397 
"rw_ios_per_sec": 0, 00:20:37.397 "rw_mbytes_per_sec": 0, 00:20:37.397 "r_mbytes_per_sec": 0, 00:20:37.397 "w_mbytes_per_sec": 0 00:20:37.397 }, 00:20:37.397 "claimed": true, 00:20:37.397 "claim_type": "exclusive_write", 00:20:37.397 "zoned": false, 00:20:37.397 "supported_io_types": { 00:20:37.397 "read": true, 00:20:37.397 "write": true, 00:20:37.397 "unmap": true, 00:20:37.397 "flush": true, 00:20:37.397 "reset": true, 00:20:37.397 "nvme_admin": false, 00:20:37.397 "nvme_io": false, 00:20:37.397 "nvme_io_md": false, 00:20:37.397 "write_zeroes": true, 00:20:37.397 "zcopy": true, 00:20:37.397 "get_zone_info": false, 00:20:37.397 "zone_management": false, 00:20:37.397 "zone_append": false, 00:20:37.397 "compare": false, 00:20:37.397 "compare_and_write": false, 00:20:37.397 "abort": true, 00:20:37.397 "seek_hole": false, 00:20:37.397 "seek_data": false, 00:20:37.397 "copy": true, 00:20:37.397 "nvme_iov_md": false 00:20:37.397 }, 00:20:37.397 "memory_domains": [ 00:20:37.397 { 00:20:37.397 "dma_device_id": "system", 00:20:37.397 "dma_device_type": 1 00:20:37.397 }, 00:20:37.397 { 00:20:37.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.397 "dma_device_type": 2 00:20:37.397 } 00:20:37.397 ], 00:20:37.397 "driver_specific": {} 00:20:37.397 } 00:20:37.397 ] 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.397 "name": "Existed_Raid", 00:20:37.397 "uuid": "25a48310-6462-44d5-8412-cdfe70e3005b", 00:20:37.397 "strip_size_kb": 0, 00:20:37.397 "state": "configuring", 00:20:37.397 "raid_level": "raid1", 00:20:37.397 "superblock": true, 00:20:37.397 "num_base_bdevs": 3, 00:20:37.397 "num_base_bdevs_discovered": 1, 00:20:37.397 "num_base_bdevs_operational": 3, 00:20:37.397 "base_bdevs_list": [ 00:20:37.397 { 00:20:37.397 "name": "BaseBdev1", 00:20:37.397 "uuid": "1a5adb64-5cd4-4dce-a1fd-24a7ed205999", 00:20:37.397 "is_configured": true, 00:20:37.397 "data_offset": 2048, 00:20:37.397 "data_size": 63488 
00:20:37.397 }, 00:20:37.397 { 00:20:37.397 "name": "BaseBdev2", 00:20:37.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.397 "is_configured": false, 00:20:37.397 "data_offset": 0, 00:20:37.397 "data_size": 0 00:20:37.397 }, 00:20:37.397 { 00:20:37.397 "name": "BaseBdev3", 00:20:37.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.397 "is_configured": false, 00:20:37.397 "data_offset": 0, 00:20:37.397 "data_size": 0 00:20:37.397 } 00:20:37.397 ] 00:20:37.397 }' 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.397 17:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.657 [2024-11-08 17:08:14.204046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:37.657 [2024-11-08 17:08:14.204111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.657 [2024-11-08 17:08:14.212111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.657 [2024-11-08 17:08:14.214288] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:37.657 [2024-11-08 17:08:14.214424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:37.657 [2024-11-08 17:08:14.214486] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:37.657 [2024-11-08 17:08:14.214513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.657 "name": "Existed_Raid", 00:20:37.657 "uuid": "20441cad-a51b-4225-9505-78634b2ba3b2", 00:20:37.657 "strip_size_kb": 0, 00:20:37.657 "state": "configuring", 00:20:37.657 "raid_level": "raid1", 00:20:37.657 "superblock": true, 00:20:37.657 "num_base_bdevs": 3, 00:20:37.657 "num_base_bdevs_discovered": 1, 00:20:37.657 "num_base_bdevs_operational": 3, 00:20:37.657 "base_bdevs_list": [ 00:20:37.657 { 00:20:37.657 "name": "BaseBdev1", 00:20:37.657 "uuid": "1a5adb64-5cd4-4dce-a1fd-24a7ed205999", 00:20:37.657 "is_configured": true, 00:20:37.657 "data_offset": 2048, 00:20:37.657 "data_size": 63488 00:20:37.657 }, 00:20:37.657 { 00:20:37.657 "name": "BaseBdev2", 00:20:37.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.657 "is_configured": false, 00:20:37.657 "data_offset": 0, 00:20:37.657 "data_size": 0 00:20:37.657 }, 00:20:37.657 { 00:20:37.657 "name": "BaseBdev3", 00:20:37.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.657 "is_configured": false, 00:20:37.657 "data_offset": 0, 00:20:37.657 "data_size": 0 00:20:37.657 } 00:20:37.657 ] 00:20:37.657 }' 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.657 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:37.914 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.915 [2024-11-08 17:08:14.572874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.915 BaseBdev2 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.915 [ 00:20:37.915 { 00:20:37.915 "name": "BaseBdev2", 00:20:37.915 "aliases": [ 00:20:37.915 "cff41b48-b41b-4834-9d96-50026fb70184" 00:20:37.915 ], 00:20:37.915 "product_name": "Malloc disk", 00:20:37.915 "block_size": 512, 00:20:37.915 "num_blocks": 65536, 00:20:37.915 "uuid": "cff41b48-b41b-4834-9d96-50026fb70184", 00:20:37.915 "assigned_rate_limits": { 00:20:37.915 "rw_ios_per_sec": 0, 00:20:37.915 "rw_mbytes_per_sec": 0, 00:20:37.915 "r_mbytes_per_sec": 0, 00:20:37.915 "w_mbytes_per_sec": 0 00:20:37.915 }, 00:20:37.915 "claimed": true, 00:20:37.915 "claim_type": "exclusive_write", 00:20:37.915 "zoned": false, 00:20:37.915 "supported_io_types": { 00:20:37.915 "read": true, 00:20:37.915 "write": true, 00:20:37.915 "unmap": true, 00:20:37.915 "flush": true, 00:20:37.915 "reset": true, 00:20:37.915 "nvme_admin": false, 00:20:37.915 "nvme_io": false, 00:20:37.915 "nvme_io_md": false, 00:20:37.915 "write_zeroes": true, 00:20:37.915 "zcopy": true, 00:20:37.915 "get_zone_info": false, 00:20:37.915 "zone_management": false, 00:20:37.915 "zone_append": false, 00:20:37.915 "compare": false, 00:20:37.915 "compare_and_write": false, 00:20:37.915 "abort": true, 00:20:37.915 "seek_hole": false, 00:20:37.915 "seek_data": false, 00:20:37.915 "copy": true, 00:20:37.915 "nvme_iov_md": false 00:20:37.915 }, 00:20:37.915 "memory_domains": [ 00:20:37.915 { 00:20:37.915 "dma_device_id": "system", 00:20:37.915 "dma_device_type": 1 00:20:37.915 }, 00:20:37.915 { 00:20:37.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.915 "dma_device_type": 2 00:20:37.915 } 00:20:37.915 ], 00:20:37.915 "driver_specific": {} 00:20:37.915 } 00:20:37.915 ] 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.915 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.172 
17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.172 "name": "Existed_Raid", 00:20:38.172 "uuid": "20441cad-a51b-4225-9505-78634b2ba3b2", 00:20:38.172 "strip_size_kb": 0, 00:20:38.172 "state": "configuring", 00:20:38.172 "raid_level": "raid1", 00:20:38.172 "superblock": true, 00:20:38.172 "num_base_bdevs": 3, 00:20:38.172 "num_base_bdevs_discovered": 2, 00:20:38.172 "num_base_bdevs_operational": 3, 00:20:38.172 "base_bdevs_list": [ 00:20:38.173 { 00:20:38.173 "name": "BaseBdev1", 00:20:38.173 "uuid": "1a5adb64-5cd4-4dce-a1fd-24a7ed205999", 00:20:38.173 "is_configured": true, 00:20:38.173 "data_offset": 2048, 00:20:38.173 "data_size": 63488 00:20:38.173 }, 00:20:38.173 { 00:20:38.173 "name": "BaseBdev2", 00:20:38.173 "uuid": "cff41b48-b41b-4834-9d96-50026fb70184", 00:20:38.173 "is_configured": true, 00:20:38.173 "data_offset": 2048, 00:20:38.173 "data_size": 63488 00:20:38.173 }, 00:20:38.173 { 00:20:38.173 "name": "BaseBdev3", 00:20:38.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.173 "is_configured": false, 00:20:38.173 "data_offset": 0, 00:20:38.173 "data_size": 0 00:20:38.173 } 00:20:38.173 ] 00:20:38.173 }' 00:20:38.173 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.173 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.430 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:38.430 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.431 BaseBdev3 00:20:38.431 [2024-11-08 17:08:14.997099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:38.431 [2024-11-08 17:08:14.997372] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007e80 00:20:38.431 [2024-11-08 17:08:14.997394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:38.431 [2024-11-08 17:08:14.997682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:38.431 [2024-11-08 17:08:14.997847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:38.431 [2024-11-08 17:08:14.997857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:38.431 [2024-11-08 17:08:14.998002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.431 17:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.431 17:08:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.431 [ 00:20:38.431 { 00:20:38.431 "name": "BaseBdev3", 00:20:38.431 "aliases": [ 00:20:38.431 "1918a977-92c8-48b8-b692-977b519def49" 00:20:38.431 ], 00:20:38.431 "product_name": "Malloc disk", 00:20:38.431 "block_size": 512, 00:20:38.431 "num_blocks": 65536, 00:20:38.431 "uuid": "1918a977-92c8-48b8-b692-977b519def49", 00:20:38.431 "assigned_rate_limits": { 00:20:38.431 "rw_ios_per_sec": 0, 00:20:38.431 "rw_mbytes_per_sec": 0, 00:20:38.431 "r_mbytes_per_sec": 0, 00:20:38.431 "w_mbytes_per_sec": 0 00:20:38.431 }, 00:20:38.431 "claimed": true, 00:20:38.431 "claim_type": "exclusive_write", 00:20:38.431 "zoned": false, 00:20:38.431 "supported_io_types": { 00:20:38.431 "read": true, 00:20:38.431 "write": true, 00:20:38.431 "unmap": true, 00:20:38.431 "flush": true, 00:20:38.431 "reset": true, 00:20:38.431 "nvme_admin": false, 00:20:38.431 "nvme_io": false, 00:20:38.431 "nvme_io_md": false, 00:20:38.431 "write_zeroes": true, 00:20:38.431 "zcopy": true, 00:20:38.431 "get_zone_info": false, 00:20:38.431 "zone_management": false, 00:20:38.431 "zone_append": false, 00:20:38.431 "compare": false, 00:20:38.431 "compare_and_write": false, 00:20:38.431 "abort": true, 00:20:38.431 "seek_hole": false, 00:20:38.431 "seek_data": false, 00:20:38.431 "copy": true, 00:20:38.431 "nvme_iov_md": false 00:20:38.431 }, 00:20:38.431 "memory_domains": [ 00:20:38.431 { 00:20:38.431 "dma_device_id": "system", 00:20:38.431 "dma_device_type": 1 00:20:38.431 }, 00:20:38.431 { 00:20:38.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.431 "dma_device_type": 2 00:20:38.431 } 00:20:38.431 ], 00:20:38.431 "driver_specific": {} 00:20:38.431 } 00:20:38.431 ] 
00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.431 
17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.431 "name": "Existed_Raid", 00:20:38.431 "uuid": "20441cad-a51b-4225-9505-78634b2ba3b2", 00:20:38.431 "strip_size_kb": 0, 00:20:38.431 "state": "online", 00:20:38.431 "raid_level": "raid1", 00:20:38.431 "superblock": true, 00:20:38.431 "num_base_bdevs": 3, 00:20:38.431 "num_base_bdevs_discovered": 3, 00:20:38.431 "num_base_bdevs_operational": 3, 00:20:38.431 "base_bdevs_list": [ 00:20:38.431 { 00:20:38.431 "name": "BaseBdev1", 00:20:38.431 "uuid": "1a5adb64-5cd4-4dce-a1fd-24a7ed205999", 00:20:38.431 "is_configured": true, 00:20:38.431 "data_offset": 2048, 00:20:38.431 "data_size": 63488 00:20:38.431 }, 00:20:38.431 { 00:20:38.431 "name": "BaseBdev2", 00:20:38.431 "uuid": "cff41b48-b41b-4834-9d96-50026fb70184", 00:20:38.431 "is_configured": true, 00:20:38.431 "data_offset": 2048, 00:20:38.431 "data_size": 63488 00:20:38.431 }, 00:20:38.431 { 00:20:38.431 "name": "BaseBdev3", 00:20:38.431 "uuid": "1918a977-92c8-48b8-b692-977b519def49", 00:20:38.431 "is_configured": true, 00:20:38.431 "data_offset": 2048, 00:20:38.431 "data_size": 63488 00:20:38.431 } 00:20:38.431 ] 00:20:38.431 }' 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.431 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.689 [2024-11-08 17:08:15.365627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.689 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:38.689 "name": "Existed_Raid", 00:20:38.689 "aliases": [ 00:20:38.689 "20441cad-a51b-4225-9505-78634b2ba3b2" 00:20:38.689 ], 00:20:38.689 "product_name": "Raid Volume", 00:20:38.689 "block_size": 512, 00:20:38.689 "num_blocks": 63488, 00:20:38.689 "uuid": "20441cad-a51b-4225-9505-78634b2ba3b2", 00:20:38.689 "assigned_rate_limits": { 00:20:38.689 "rw_ios_per_sec": 0, 00:20:38.689 "rw_mbytes_per_sec": 0, 00:20:38.689 "r_mbytes_per_sec": 0, 00:20:38.689 "w_mbytes_per_sec": 0 00:20:38.689 }, 00:20:38.689 "claimed": false, 00:20:38.689 "zoned": false, 00:20:38.689 "supported_io_types": { 00:20:38.689 "read": true, 00:20:38.689 "write": true, 00:20:38.689 "unmap": false, 00:20:38.689 "flush": false, 00:20:38.689 "reset": true, 00:20:38.689 "nvme_admin": false, 00:20:38.689 "nvme_io": false, 00:20:38.689 "nvme_io_md": false, 00:20:38.689 "write_zeroes": true, 
00:20:38.689 "zcopy": false, 00:20:38.689 "get_zone_info": false, 00:20:38.689 "zone_management": false, 00:20:38.689 "zone_append": false, 00:20:38.689 "compare": false, 00:20:38.689 "compare_and_write": false, 00:20:38.689 "abort": false, 00:20:38.689 "seek_hole": false, 00:20:38.689 "seek_data": false, 00:20:38.689 "copy": false, 00:20:38.689 "nvme_iov_md": false 00:20:38.689 }, 00:20:38.689 "memory_domains": [ 00:20:38.689 { 00:20:38.689 "dma_device_id": "system", 00:20:38.689 "dma_device_type": 1 00:20:38.689 }, 00:20:38.689 { 00:20:38.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.689 "dma_device_type": 2 00:20:38.690 }, 00:20:38.690 { 00:20:38.690 "dma_device_id": "system", 00:20:38.690 "dma_device_type": 1 00:20:38.690 }, 00:20:38.690 { 00:20:38.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.690 "dma_device_type": 2 00:20:38.690 }, 00:20:38.690 { 00:20:38.690 "dma_device_id": "system", 00:20:38.690 "dma_device_type": 1 00:20:38.690 }, 00:20:38.690 { 00:20:38.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.690 "dma_device_type": 2 00:20:38.690 } 00:20:38.690 ], 00:20:38.690 "driver_specific": { 00:20:38.690 "raid": { 00:20:38.690 "uuid": "20441cad-a51b-4225-9505-78634b2ba3b2", 00:20:38.690 "strip_size_kb": 0, 00:20:38.690 "state": "online", 00:20:38.690 "raid_level": "raid1", 00:20:38.690 "superblock": true, 00:20:38.690 "num_base_bdevs": 3, 00:20:38.690 "num_base_bdevs_discovered": 3, 00:20:38.690 "num_base_bdevs_operational": 3, 00:20:38.690 "base_bdevs_list": [ 00:20:38.690 { 00:20:38.690 "name": "BaseBdev1", 00:20:38.690 "uuid": "1a5adb64-5cd4-4dce-a1fd-24a7ed205999", 00:20:38.690 "is_configured": true, 00:20:38.690 "data_offset": 2048, 00:20:38.690 "data_size": 63488 00:20:38.690 }, 00:20:38.690 { 00:20:38.690 "name": "BaseBdev2", 00:20:38.690 "uuid": "cff41b48-b41b-4834-9d96-50026fb70184", 00:20:38.690 "is_configured": true, 00:20:38.690 "data_offset": 2048, 00:20:38.690 "data_size": 63488 00:20:38.690 }, 00:20:38.690 { 
00:20:38.690 "name": "BaseBdev3", 00:20:38.690 "uuid": "1918a977-92c8-48b8-b692-977b519def49", 00:20:38.690 "is_configured": true, 00:20:38.690 "data_offset": 2048, 00:20:38.690 "data_size": 63488 00:20:38.690 } 00:20:38.690 ] 00:20:38.690 } 00:20:38.690 } 00:20:38.690 }' 00:20:38.690 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:38.948 BaseBdev2 00:20:38.948 BaseBdev3' 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.948 17:08:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.948 [2024-11-08 17:08:15.561365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.948 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.949 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.949 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:38.949 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.949 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.949 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.949 
17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.949 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.949 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.949 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.949 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.949 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.206 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.206 "name": "Existed_Raid", 00:20:39.206 "uuid": "20441cad-a51b-4225-9505-78634b2ba3b2", 00:20:39.206 "strip_size_kb": 0, 00:20:39.206 "state": "online", 00:20:39.206 "raid_level": "raid1", 00:20:39.206 "superblock": true, 00:20:39.206 "num_base_bdevs": 3, 00:20:39.206 "num_base_bdevs_discovered": 2, 00:20:39.206 "num_base_bdevs_operational": 2, 00:20:39.206 "base_bdevs_list": [ 00:20:39.206 { 00:20:39.206 "name": null, 00:20:39.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.206 "is_configured": false, 00:20:39.206 "data_offset": 0, 00:20:39.206 "data_size": 63488 00:20:39.206 }, 00:20:39.206 { 00:20:39.206 "name": "BaseBdev2", 00:20:39.206 "uuid": "cff41b48-b41b-4834-9d96-50026fb70184", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 2048, 00:20:39.206 "data_size": 63488 00:20:39.206 }, 00:20:39.206 { 00:20:39.206 "name": "BaseBdev3", 00:20:39.206 "uuid": "1918a977-92c8-48b8-b692-977b519def49", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 2048, 00:20:39.206 "data_size": 63488 00:20:39.206 } 00:20:39.206 ] 00:20:39.206 }' 00:20:39.206 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.206 
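The `verify_raid_bdev_state` steps above (bdev_raid.sh@103-115) capture the raid bdev's info via `rpc_cmd bdev_raid_get_bdevs all | jq` and then compare individual fields against expected values. A minimal stand-alone sketch of that comparison pattern is below; the inline `raid_bdev_info` JSON and the `get_field` helper are hypothetical stand-ins (no SPDK target or `jq` required), not the test suite's actual helpers.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the verify_raid_bdev_state flow: capture the raid
# bdev's info JSON, pull out a few scalar fields, and [[ ]]-compare them
# against expected values. Sample JSON is hard-coded here instead of coming
# from `rpc_cmd bdev_raid_get_bdevs all`.
set -euo pipefail

raid_bdev_info='{"name":"Existed_Raid","state":"online","raid_level":"raid1","num_base_bdevs_operational":2}'

expected_state=online
expected_level=raid1
expected_operational=2

# Extract a flat string/number field with a bash regex instead of jq.
# Only good enough for scalar fields in flat JSON, which is all we need here.
get_field() {
    local json=$1 key=$2
    [[ $json =~ \"$key\":\"?([^\",}]+) ]] && printf '%s\n' "${BASH_REMATCH[1]}"
}

state=$(get_field "$raid_bdev_info" state)
level=$(get_field "$raid_bdev_info" raid_level)
operational=$(get_field "$raid_bdev_info" num_base_bdevs_operational)

[[ $state == "$expected_state" ]] || { echo "state mismatch: $state"; exit 1; }
[[ $level == "$expected_level" ]] || { echo "raid_level mismatch: $level"; exit 1; }
[[ $operational == "$expected_operational" ]] || { echo "operational mismatch"; exit 1; }
echo "Existed_Raid state verified: $state/$level/$operational"
```

The real script keeps the full JSON in `raid_bdev_info` and runs separate `jq` filters per field; the bash-regex extraction above is only a dependency-free approximation of that idea.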
17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.464 17:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.464 [2024-11-08 17:08:16.001743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.464 [2024-11-08 17:08:16.106163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:39.464 [2024-11-08 17:08:16.106391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:39.464 [2024-11-08 17:08:16.170680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:39.464 [2024-11-08 17:08:16.170868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:39.464 [2024-11-08 17:08:16.170890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.464 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.722 BaseBdev2 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.722 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.722 [ 00:20:39.722 { 00:20:39.722 "name": "BaseBdev2", 00:20:39.722 "aliases": [ 00:20:39.722 "ea044538-800e-43a2-8a99-3d6b95e1bb8e" 00:20:39.722 ], 00:20:39.722 "product_name": "Malloc disk", 00:20:39.722 "block_size": 512, 00:20:39.722 "num_blocks": 65536, 00:20:39.722 "uuid": "ea044538-800e-43a2-8a99-3d6b95e1bb8e", 00:20:39.722 "assigned_rate_limits": { 00:20:39.722 "rw_ios_per_sec": 0, 00:20:39.722 "rw_mbytes_per_sec": 0, 00:20:39.722 "r_mbytes_per_sec": 0, 00:20:39.722 "w_mbytes_per_sec": 0 00:20:39.722 }, 00:20:39.722 "claimed": false, 00:20:39.722 "zoned": false, 00:20:39.722 "supported_io_types": { 00:20:39.722 "read": true, 00:20:39.722 "write": true, 00:20:39.722 "unmap": true, 00:20:39.722 "flush": true, 00:20:39.722 "reset": true, 00:20:39.722 "nvme_admin": false, 00:20:39.723 "nvme_io": false, 00:20:39.723 
"nvme_io_md": false, 00:20:39.723 "write_zeroes": true, 00:20:39.723 "zcopy": true, 00:20:39.723 "get_zone_info": false, 00:20:39.723 "zone_management": false, 00:20:39.723 "zone_append": false, 00:20:39.723 "compare": false, 00:20:39.723 "compare_and_write": false, 00:20:39.723 "abort": true, 00:20:39.723 "seek_hole": false, 00:20:39.723 "seek_data": false, 00:20:39.723 "copy": true, 00:20:39.723 "nvme_iov_md": false 00:20:39.723 }, 00:20:39.723 "memory_domains": [ 00:20:39.723 { 00:20:39.723 "dma_device_id": "system", 00:20:39.723 "dma_device_type": 1 00:20:39.723 }, 00:20:39.723 { 00:20:39.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.723 "dma_device_type": 2 00:20:39.723 } 00:20:39.723 ], 00:20:39.723 "driver_specific": {} 00:20:39.723 } 00:20:39.723 ] 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.723 BaseBdev3 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.723 [ 00:20:39.723 { 00:20:39.723 "name": "BaseBdev3", 00:20:39.723 "aliases": [ 00:20:39.723 "275b56c8-e168-4247-b366-5c71398df951" 00:20:39.723 ], 00:20:39.723 "product_name": "Malloc disk", 00:20:39.723 "block_size": 512, 00:20:39.723 "num_blocks": 65536, 00:20:39.723 "uuid": "275b56c8-e168-4247-b366-5c71398df951", 00:20:39.723 "assigned_rate_limits": { 00:20:39.723 "rw_ios_per_sec": 0, 00:20:39.723 "rw_mbytes_per_sec": 0, 00:20:39.723 "r_mbytes_per_sec": 0, 00:20:39.723 "w_mbytes_per_sec": 0 00:20:39.723 }, 00:20:39.723 "claimed": false, 00:20:39.723 "zoned": false, 00:20:39.723 "supported_io_types": { 00:20:39.723 "read": true, 00:20:39.723 "write": true, 00:20:39.723 "unmap": true, 00:20:39.723 "flush": true, 00:20:39.723 "reset": true, 00:20:39.723 "nvme_admin": false, 
00:20:39.723 "nvme_io": false, 00:20:39.723 "nvme_io_md": false, 00:20:39.723 "write_zeroes": true, 00:20:39.723 "zcopy": true, 00:20:39.723 "get_zone_info": false, 00:20:39.723 "zone_management": false, 00:20:39.723 "zone_append": false, 00:20:39.723 "compare": false, 00:20:39.723 "compare_and_write": false, 00:20:39.723 "abort": true, 00:20:39.723 "seek_hole": false, 00:20:39.723 "seek_data": false, 00:20:39.723 "copy": true, 00:20:39.723 "nvme_iov_md": false 00:20:39.723 }, 00:20:39.723 "memory_domains": [ 00:20:39.723 { 00:20:39.723 "dma_device_id": "system", 00:20:39.723 "dma_device_type": 1 00:20:39.723 }, 00:20:39.723 { 00:20:39.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.723 "dma_device_type": 2 00:20:39.723 } 00:20:39.723 ], 00:20:39.723 "driver_specific": {} 00:20:39.723 } 00:20:39.723 ] 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.723 [2024-11-08 17:08:16.319629] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:39.723 [2024-11-08 17:08:16.319796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:39.723 [2024-11-08 17:08:16.319877] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:39.723 [2024-11-08 17:08:16.321971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.723 
17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.723 "name": "Existed_Raid", 00:20:39.723 "uuid": "deb129a9-f5f7-4efc-9547-7957297e2560", 00:20:39.723 "strip_size_kb": 0, 00:20:39.723 "state": "configuring", 00:20:39.723 "raid_level": "raid1", 00:20:39.723 "superblock": true, 00:20:39.723 "num_base_bdevs": 3, 00:20:39.723 "num_base_bdevs_discovered": 2, 00:20:39.723 "num_base_bdevs_operational": 3, 00:20:39.723 "base_bdevs_list": [ 00:20:39.723 { 00:20:39.723 "name": "BaseBdev1", 00:20:39.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.723 "is_configured": false, 00:20:39.723 "data_offset": 0, 00:20:39.723 "data_size": 0 00:20:39.723 }, 00:20:39.723 { 00:20:39.723 "name": "BaseBdev2", 00:20:39.723 "uuid": "ea044538-800e-43a2-8a99-3d6b95e1bb8e", 00:20:39.723 "is_configured": true, 00:20:39.723 "data_offset": 2048, 00:20:39.723 "data_size": 63488 00:20:39.723 }, 00:20:39.723 { 00:20:39.723 "name": "BaseBdev3", 00:20:39.723 "uuid": "275b56c8-e168-4247-b366-5c71398df951", 00:20:39.723 "is_configured": true, 00:20:39.723 "data_offset": 2048, 00:20:39.723 "data_size": 63488 00:20:39.723 } 00:20:39.723 ] 00:20:39.723 }' 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.723 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.981 [2024-11-08 17:08:16.643735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:39.981 17:08:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.981 "name": 
"Existed_Raid", 00:20:39.981 "uuid": "deb129a9-f5f7-4efc-9547-7957297e2560", 00:20:39.981 "strip_size_kb": 0, 00:20:39.981 "state": "configuring", 00:20:39.981 "raid_level": "raid1", 00:20:39.981 "superblock": true, 00:20:39.981 "num_base_bdevs": 3, 00:20:39.981 "num_base_bdevs_discovered": 1, 00:20:39.981 "num_base_bdevs_operational": 3, 00:20:39.981 "base_bdevs_list": [ 00:20:39.981 { 00:20:39.981 "name": "BaseBdev1", 00:20:39.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.981 "is_configured": false, 00:20:39.981 "data_offset": 0, 00:20:39.981 "data_size": 0 00:20:39.981 }, 00:20:39.981 { 00:20:39.981 "name": null, 00:20:39.981 "uuid": "ea044538-800e-43a2-8a99-3d6b95e1bb8e", 00:20:39.981 "is_configured": false, 00:20:39.981 "data_offset": 0, 00:20:39.981 "data_size": 63488 00:20:39.981 }, 00:20:39.981 { 00:20:39.981 "name": "BaseBdev3", 00:20:39.981 "uuid": "275b56c8-e168-4247-b366-5c71398df951", 00:20:39.981 "is_configured": true, 00:20:39.981 "data_offset": 2048, 00:20:39.981 "data_size": 63488 00:20:39.981 } 00:20:39.981 ] 00:20:39.981 }' 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.981 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.547 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.547 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.547 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.547 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:40.547 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.547 17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:40.547 
17:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:40.547 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.547 17:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.547 [2024-11-08 17:08:17.016387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:40.547 BaseBdev1 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:40.547 17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:40.547  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:40.547  [
00:20:40.547  {
00:20:40.547  "name": "BaseBdev1",
00:20:40.547  "aliases": [
00:20:40.547  "eb112646-eb1b-4717-8f60-1b1e36eca47e"
00:20:40.547  ],
00:20:40.547  "product_name": "Malloc disk",
00:20:40.547  "block_size": 512,
00:20:40.547  "num_blocks": 65536,
00:20:40.547  "uuid": "eb112646-eb1b-4717-8f60-1b1e36eca47e",
00:20:40.547  "assigned_rate_limits": {
00:20:40.547  "rw_ios_per_sec": 0,
00:20:40.547  "rw_mbytes_per_sec": 0,
00:20:40.547  "r_mbytes_per_sec": 0,
00:20:40.547  "w_mbytes_per_sec": 0
00:20:40.547  },
00:20:40.547  "claimed": true,
00:20:40.547  "claim_type": "exclusive_write",
00:20:40.547  "zoned": false,
00:20:40.547  "supported_io_types": {
00:20:40.547  "read": true,
00:20:40.547  "write": true,
00:20:40.547  "unmap": true,
00:20:40.547  "flush": true,
00:20:40.547  "reset": true,
00:20:40.547  "nvme_admin": false,
00:20:40.547  "nvme_io": false,
00:20:40.547  "nvme_io_md": false,
00:20:40.547  "write_zeroes": true,
00:20:40.547  "zcopy": true,
00:20:40.547  "get_zone_info": false,
00:20:40.547  "zone_management": false,
00:20:40.547  "zone_append": false,
00:20:40.547  "compare": false,
00:20:40.547  "compare_and_write": false,
00:20:40.547  "abort": true,
00:20:40.547  "seek_hole": false,
00:20:40.547  "seek_data": false,
00:20:40.547  "copy": true,
00:20:40.547  "nvme_iov_md": false
00:20:40.547  },
00:20:40.547  "memory_domains": [
00:20:40.547  {
00:20:40.547  "dma_device_id": "system",
00:20:40.547  "dma_device_type": 1
00:20:40.547  },
00:20:40.547  {
00:20:40.547  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:40.547  "dma_device_type": 2
00:20:40.547  }
00:20:40.547  ],
00:20:40.547  "driver_specific": {}
00:20:40.547  }
00:20:40.547  ]
00:20:40.547  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:40.547  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:20:40.547  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:20:40.547  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:40.547  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:40.547  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:40.548  "name": "Existed_Raid",
00:20:40.548  "uuid": "deb129a9-f5f7-4efc-9547-7957297e2560",
00:20:40.548  "strip_size_kb": 0,
00:20:40.548  "state": "configuring",
00:20:40.548  "raid_level": "raid1",
00:20:40.548  "superblock": true,
00:20:40.548  "num_base_bdevs": 3,
00:20:40.548  "num_base_bdevs_discovered": 2,
00:20:40.548  "num_base_bdevs_operational": 3,
00:20:40.548  "base_bdevs_list": [
00:20:40.548  {
00:20:40.548  "name": "BaseBdev1",
00:20:40.548  "uuid": "eb112646-eb1b-4717-8f60-1b1e36eca47e",
00:20:40.548  "is_configured": true,
00:20:40.548  "data_offset": 2048,
00:20:40.548  "data_size": 63488
00:20:40.548  },
00:20:40.548  {
00:20:40.548  "name": null,
00:20:40.548  "uuid": "ea044538-800e-43a2-8a99-3d6b95e1bb8e",
00:20:40.548  "is_configured": false,
00:20:40.548  "data_offset": 0,
00:20:40.548  "data_size": 63488
00:20:40.548  },
00:20:40.548  {
00:20:40.548  "name": "BaseBdev3",
00:20:40.548  "uuid": "275b56c8-e168-4247-b366-5c71398df951",
00:20:40.548  "is_configured": true,
00:20:40.548  "data_offset": 2048,
00:20:40.548  "data_size": 63488
00:20:40.548  }
00:20:40.548  ]
00:20:40.548  }'
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:40.548  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:40.806  [2024-11-08 17:08:17.400515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:40.806  "name": "Existed_Raid",
00:20:40.806  "uuid": "deb129a9-f5f7-4efc-9547-7957297e2560",
00:20:40.806  "strip_size_kb": 0,
00:20:40.806  "state": "configuring",
00:20:40.806  "raid_level": "raid1",
00:20:40.806  "superblock": true,
00:20:40.806  "num_base_bdevs": 3,
00:20:40.806  "num_base_bdevs_discovered": 1,
00:20:40.806  "num_base_bdevs_operational": 3,
00:20:40.806  "base_bdevs_list": [
00:20:40.806  {
00:20:40.806  "name": "BaseBdev1",
00:20:40.806  "uuid": "eb112646-eb1b-4717-8f60-1b1e36eca47e",
00:20:40.806  "is_configured": true,
00:20:40.806  "data_offset": 2048,
00:20:40.806  "data_size": 63488
00:20:40.806  },
00:20:40.806  {
00:20:40.806  "name": null,
00:20:40.806  "uuid": "ea044538-800e-43a2-8a99-3d6b95e1bb8e",
00:20:40.806  "is_configured": false,
00:20:40.806  "data_offset": 0,
00:20:40.806  "data_size": 63488
00:20:40.806  },
00:20:40.806  {
00:20:40.806  "name": null,
00:20:40.806  "uuid": "275b56c8-e168-4247-b366-5c71398df951",
00:20:40.806  "is_configured": false,
00:20:40.806  "data_offset": 0,
00:20:40.806  "data_size": 63488
00:20:40.806  }
00:20:40.806  ]
00:20:40.806  }'
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:40.806  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.064  [2024-11-08 17:08:17.748641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:41.064  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.065  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.065  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.065  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.322  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:41.322  "name": "Existed_Raid",
00:20:41.322  "uuid": "deb129a9-f5f7-4efc-9547-7957297e2560",
00:20:41.322  "strip_size_kb": 0,
00:20:41.322  "state": "configuring",
00:20:41.322  "raid_level": "raid1",
00:20:41.322  "superblock": true,
00:20:41.322  "num_base_bdevs": 3,
00:20:41.322  "num_base_bdevs_discovered": 2,
00:20:41.322  "num_base_bdevs_operational": 3,
00:20:41.322  "base_bdevs_list": [
00:20:41.322  {
00:20:41.322  "name": "BaseBdev1",
00:20:41.322  "uuid": "eb112646-eb1b-4717-8f60-1b1e36eca47e",
00:20:41.322  "is_configured": true,
00:20:41.322  "data_offset": 2048,
00:20:41.322  "data_size": 63488
00:20:41.322  },
00:20:41.322  {
00:20:41.322  "name": null,
00:20:41.322  "uuid": "ea044538-800e-43a2-8a99-3d6b95e1bb8e",
00:20:41.322  "is_configured": false,
00:20:41.322  "data_offset": 0,
00:20:41.322  "data_size": 63488
00:20:41.322  },
00:20:41.322  {
00:20:41.322  "name": "BaseBdev3",
00:20:41.322  "uuid": "275b56c8-e168-4247-b366-5c71398df951",
00:20:41.322  "is_configured": true,
00:20:41.322  "data_offset": 2048,
00:20:41.322  "data_size": 63488
00:20:41.322  }
00:20:41.322  ]
00:20:41.322  }'
00:20:41.322  17:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:41.322  17:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.580  [2024-11-08 17:08:18.084739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:41.580  "name": "Existed_Raid",
00:20:41.580  "uuid": "deb129a9-f5f7-4efc-9547-7957297e2560",
00:20:41.580  "strip_size_kb": 0,
00:20:41.580  "state": "configuring",
00:20:41.580  "raid_level": "raid1",
00:20:41.580  "superblock": true,
00:20:41.580  "num_base_bdevs": 3,
00:20:41.580  "num_base_bdevs_discovered": 1,
00:20:41.580  "num_base_bdevs_operational": 3,
00:20:41.580  "base_bdevs_list": [
00:20:41.580  {
00:20:41.580  "name": null,
00:20:41.580  "uuid": "eb112646-eb1b-4717-8f60-1b1e36eca47e",
00:20:41.580  "is_configured": false,
00:20:41.580  "data_offset": 0,
00:20:41.580  "data_size": 63488
00:20:41.580  },
00:20:41.580  {
00:20:41.580  "name": null,
00:20:41.580  "uuid": "ea044538-800e-43a2-8a99-3d6b95e1bb8e",
00:20:41.580  "is_configured": false,
00:20:41.580  "data_offset": 0,
00:20:41.580  "data_size": 63488
00:20:41.580  },
00:20:41.580  {
00:20:41.580  "name": "BaseBdev3",
00:20:41.580  "uuid": "275b56c8-e168-4247-b366-5c71398df951",
00:20:41.580  "is_configured": true,
00:20:41.580  "data_offset": 2048,
00:20:41.580  "data_size": 63488
00:20:41.580  }
00:20:41.580  ]
00:20:41.580  }'
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:41.580  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.838  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.838  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:20:41.838  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.838  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.839  [2024-11-08 17:08:18.510960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:41.839  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.096  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:42.097  "name": "Existed_Raid",
00:20:42.097  "uuid": "deb129a9-f5f7-4efc-9547-7957297e2560",
00:20:42.097  "strip_size_kb": 0,
00:20:42.097  "state": "configuring",
00:20:42.097  "raid_level": "raid1",
00:20:42.097  "superblock": true,
00:20:42.097  "num_base_bdevs": 3,
00:20:42.097  "num_base_bdevs_discovered": 2,
00:20:42.097  "num_base_bdevs_operational": 3,
00:20:42.097  "base_bdevs_list": [
00:20:42.097  {
00:20:42.097  "name": null,
00:20:42.097  "uuid": "eb112646-eb1b-4717-8f60-1b1e36eca47e",
00:20:42.097  "is_configured": false,
00:20:42.097  "data_offset": 0,
00:20:42.097  "data_size": 63488
00:20:42.097  },
00:20:42.097  {
00:20:42.097  "name": "BaseBdev2",
00:20:42.097  "uuid": "ea044538-800e-43a2-8a99-3d6b95e1bb8e",
00:20:42.097  "is_configured": true,
00:20:42.097  "data_offset": 2048,
00:20:42.097  "data_size": 63488
00:20:42.097  },
00:20:42.097  {
00:20:42.097  "name": "BaseBdev3",
00:20:42.097  "uuid": "275b56c8-e168-4247-b366-5c71398df951",
00:20:42.097  "is_configured": true,
00:20:42.097  "data_offset": 2048,
00:20:42.097  "data_size": 63488
00:20:42.097  }
00:20:42.097  ]
00:20:42.097  }'
00:20:42.097  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:42.097  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u eb112646-eb1b-4717-8f60-1b1e36eca47e
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.355  [2024-11-08 17:08:18.955513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:20:42.355  [2024-11-08 17:08:18.955737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:20:42.355  [2024-11-08 17:08:18.955749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:42.355  NewBaseBdev
00:20:42.355  [2024-11-08 17:08:18.956018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:20:42.355  [2024-11-08 17:08:18.956157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:20:42.355  [2024-11-08 17:08:18.956174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.355  [2024-11-08 17:08:18.956296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.355  [
00:20:42.355  {
00:20:42.355  "name": "NewBaseBdev",
00:20:42.355  "aliases": [
00:20:42.355  "eb112646-eb1b-4717-8f60-1b1e36eca47e"
00:20:42.355  ],
00:20:42.355  "product_name": "Malloc disk",
00:20:42.355  "block_size": 512,
00:20:42.355  "num_blocks": 65536,
00:20:42.355  "uuid": "eb112646-eb1b-4717-8f60-1b1e36eca47e",
00:20:42.355  "assigned_rate_limits": {
00:20:42.355  "rw_ios_per_sec": 0,
00:20:42.355  "rw_mbytes_per_sec": 0,
00:20:42.355  "r_mbytes_per_sec": 0,
00:20:42.355  "w_mbytes_per_sec": 0
00:20:42.355  },
00:20:42.355  "claimed": true,
00:20:42.355  "claim_type": "exclusive_write",
00:20:42.355  "zoned": false,
00:20:42.355  "supported_io_types": {
00:20:42.355  "read": true,
00:20:42.355  "write": true,
00:20:42.355  "unmap": true,
00:20:42.355  "flush": true,
00:20:42.355  "reset": true,
00:20:42.355  "nvme_admin": false,
00:20:42.355  "nvme_io": false,
00:20:42.355  "nvme_io_md": false,
00:20:42.355  "write_zeroes": true,
00:20:42.355  "zcopy": true,
00:20:42.355  "get_zone_info": false,
00:20:42.355  "zone_management": false,
00:20:42.355  "zone_append": false,
00:20:42.355  "compare": false,
00:20:42.355  "compare_and_write": false,
00:20:42.355  "abort": true,
00:20:42.355  "seek_hole": false,
00:20:42.355  "seek_data": false,
00:20:42.355  "copy": true,
00:20:42.355  "nvme_iov_md": false
00:20:42.355  },
00:20:42.355  "memory_domains": [
00:20:42.355  {
00:20:42.355  "dma_device_id": "system",
00:20:42.355  "dma_device_type": 1
00:20:42.355  },
00:20:42.355  {
00:20:42.355  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:42.355  "dma_device_type": 2
00:20:42.355  }
00:20:42.355  ],
00:20:42.355  "driver_specific": {}
00:20:42.355  }
00:20:42.355  ]
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:42.355  17:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.355  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:42.355  "name": "Existed_Raid",
00:20:42.355  "uuid": "deb129a9-f5f7-4efc-9547-7957297e2560",
00:20:42.355  "strip_size_kb": 0,
00:20:42.356  "state": "online",
00:20:42.356  "raid_level": "raid1",
00:20:42.356  "superblock": true,
00:20:42.356  "num_base_bdevs": 3,
00:20:42.356  "num_base_bdevs_discovered": 3,
00:20:42.356  "num_base_bdevs_operational": 3,
00:20:42.356  "base_bdevs_list": [
00:20:42.356  {
00:20:42.356  "name": "NewBaseBdev",
00:20:42.356  "uuid": "eb112646-eb1b-4717-8f60-1b1e36eca47e",
00:20:42.356  "is_configured": true,
00:20:42.356  "data_offset": 2048,
00:20:42.356  "data_size": 63488
00:20:42.356  },
00:20:42.356  {
00:20:42.356  "name": "BaseBdev2",
00:20:42.356  "uuid": "ea044538-800e-43a2-8a99-3d6b95e1bb8e",
00:20:42.356  "is_configured": true,
00:20:42.356  "data_offset": 2048,
00:20:42.356  "data_size": 63488
00:20:42.356  },
00:20:42.356  {
00:20:42.356  "name": "BaseBdev3",
00:20:42.356  "uuid": "275b56c8-e168-4247-b366-5c71398df951",
00:20:42.356  "is_configured": true,
00:20:42.356  "data_offset": 2048,
00:20:42.356  "data_size": 63488
00:20:42.356  }
00:20:42.356  ]
00:20:42.356  }'
00:20:42.356  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:42.356  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.613  [2024-11-08 17:08:19.300016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.613  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:42.613  "name": "Existed_Raid",
00:20:42.613  "aliases": [
00:20:42.613  "deb129a9-f5f7-4efc-9547-7957297e2560"
00:20:42.613  ],
00:20:42.613  "product_name": "Raid Volume",
00:20:42.613  "block_size": 512,
00:20:42.613  "num_blocks": 63488,
00:20:42.613  "uuid": "deb129a9-f5f7-4efc-9547-7957297e2560",
00:20:42.613  "assigned_rate_limits": {
00:20:42.613  "rw_ios_per_sec": 0,
00:20:42.613  "rw_mbytes_per_sec": 0,
00:20:42.613  "r_mbytes_per_sec": 0,
00:20:42.613  "w_mbytes_per_sec": 0
00:20:42.613  },
00:20:42.613  "claimed": false,
00:20:42.613  "zoned": false,
00:20:42.613  "supported_io_types": {
00:20:42.613  "read": true,
00:20:42.613  "write": true,
00:20:42.613  "unmap": false,
00:20:42.613  "flush": false,
00:20:42.613  "reset": true,
00:20:42.613  "nvme_admin": false,
00:20:42.613  "nvme_io": false,
00:20:42.613  "nvme_io_md": false,
00:20:42.613  "write_zeroes": true,
00:20:42.613  "zcopy": false,
00:20:42.613  "get_zone_info": false,
00:20:42.613  "zone_management": false,
00:20:42.613  "zone_append": false,
00:20:42.613  "compare": false,
00:20:42.613  "compare_and_write": false,
00:20:42.613  "abort": false,
00:20:42.613  "seek_hole": false,
00:20:42.613  "seek_data": false,
00:20:42.613  "copy": false,
00:20:42.613  "nvme_iov_md": false
00:20:42.613  },
00:20:42.613  "memory_domains": [
00:20:42.613  {
00:20:42.614  "dma_device_id": "system",
00:20:42.614  "dma_device_type": 1
00:20:42.614  },
00:20:42.614  {
00:20:42.614  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:42.614  "dma_device_type": 2
00:20:42.614  },
00:20:42.614  {
00:20:42.614  "dma_device_id": "system",
00:20:42.614  "dma_device_type": 1
00:20:42.614  },
00:20:42.614  {
00:20:42.614  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:42.614  "dma_device_type": 2
00:20:42.614  },
00:20:42.614  {
00:20:42.614  "dma_device_id": "system",
00:20:42.614  "dma_device_type": 1
00:20:42.614  },
00:20:42.614  {
00:20:42.614  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:42.614  "dma_device_type": 2
00:20:42.614  }
00:20:42.614  ],
00:20:42.614  "driver_specific": {
00:20:42.614  "raid": {
00:20:42.614  "uuid": "deb129a9-f5f7-4efc-9547-7957297e2560",
00:20:42.614  "strip_size_kb": 0,
00:20:42.614  "state": "online",
00:20:42.614  "raid_level": "raid1",
00:20:42.614  "superblock": true,
00:20:42.614  "num_base_bdevs": 3,
00:20:42.614  "num_base_bdevs_discovered": 3,
00:20:42.614  "num_base_bdevs_operational": 3,
00:20:42.614  "base_bdevs_list": [
00:20:42.614  {
00:20:42.614  "name": "NewBaseBdev",
00:20:42.614  "uuid": "eb112646-eb1b-4717-8f60-1b1e36eca47e",
00:20:42.614  "is_configured": true,
00:20:42.614  "data_offset": 2048,
00:20:42.614  "data_size": 63488
00:20:42.614  },
00:20:42.614  {
00:20:42.614  "name": "BaseBdev2",
00:20:42.614  "uuid": "ea044538-800e-43a2-8a99-3d6b95e1bb8e",
00:20:42.614  "is_configured": true,
00:20:42.614  "data_offset": 2048,
00:20:42.614  "data_size": 63488
00:20:42.614  },
00:20:42.614  {
00:20:42.614  "name": "BaseBdev3",
00:20:42.614  "uuid": "275b56c8-e168-4247-b366-5c71398df951",
00:20:42.614  "is_configured": true,
00:20:42.614  "data_offset": 2048,
00:20:42.614  "data_size": 63488
00:20:42.614  }
00:20:42.614  ]
00:20:42.614  }
00:20:42.614  }
00:20:42.614  }'
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:20:42.871  BaseBdev2
00:20:42.871  BaseBdev3'
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \  ]]
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \  ]]
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \  ]]
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.871  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:42.871  [2024-11-08 17:08:19.487693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:20:42.871  [2024-11-08 17:08:19.487849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:42.871  [2024-11-08 17:08:19.487977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:42.871  [2024-11-08 17:08:19.488295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:42.871  [2024-11-08 17:08:19.488433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:20:42.872  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.872  17:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66738
00:20:42.872  17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66738 ']'
00:20:42.872  17:08:19
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66738 00:20:42.872 17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:20:42.872 17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:42.872 17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66738 00:20:42.872 killing process with pid 66738 00:20:42.872 17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:42.872 17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:42.872 17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66738' 00:20:42.872 17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66738 00:20:42.872 [2024-11-08 17:08:19.517118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:42.872 17:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66738 00:20:43.128 [2024-11-08 17:08:19.711804] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:44.060 17:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:44.060 00:20:44.060 real 0m7.963s 00:20:44.060 user 0m12.591s 00:20:44.060 sys 0m1.341s 00:20:44.060 17:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:44.060 17:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.060 ************************************ 00:20:44.060 END TEST raid_state_function_test_sb 00:20:44.060 ************************************ 00:20:44.060 17:08:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:20:44.060 17:08:20 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:44.060 17:08:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:44.060 17:08:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.060 ************************************ 00:20:44.060 START TEST raid_superblock_test 00:20:44.060 ************************************ 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:44.060 17:08:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67336 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67336 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 67336 ']' 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:44.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:44.060 17:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.060 [2024-11-08 17:08:20.598059] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:20:44.060 [2024-11-08 17:08:20.598200] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67336 ] 00:20:44.060 [2024-11-08 17:08:20.760562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.317 [2024-11-08 17:08:20.877803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.317 [2024-11-08 17:08:21.025160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.317 [2024-11-08 17:08:21.025235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:20:44.882 
17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.882 malloc1 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.882 [2024-11-08 17:08:21.506554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:44.882 [2024-11-08 17:08:21.506732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.882 [2024-11-08 17:08:21.506794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:44.882 [2024-11-08 17:08:21.506931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.882 [2024-11-08 17:08:21.509269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.882 [2024-11-08 17:08:21.509378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:44.882 pt1 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.882 malloc2 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.882 [2024-11-08 17:08:21.545083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:44.882 [2024-11-08 17:08:21.545225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.882 [2024-11-08 17:08:21.545269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:44.882 [2024-11-08 17:08:21.545317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.882 [2024-11-08 17:08:21.547635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.882 pt2 00:20:44.882 [2024-11-08 17:08:21.547767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:44.882 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:44.883 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:44.883 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:44.883 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:44.883 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:44.883 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:44.883 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.883 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.141 malloc3 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.141 [2024-11-08 17:08:21.610807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:45.141 [2024-11-08 17:08:21.610969] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.141 [2024-11-08 17:08:21.611020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:45.141 [2024-11-08 17:08:21.611075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.141 [2024-11-08 17:08:21.613366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.141 [2024-11-08 17:08:21.613474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:45.141 pt3 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.141 [2024-11-08 17:08:21.622875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:45.141 [2024-11-08 17:08:21.624926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:45.141 [2024-11-08 17:08:21.625071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:45.141 [2024-11-08 17:08:21.625266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:45.141 [2024-11-08 17:08:21.625339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:45.141 [2024-11-08 17:08:21.625643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:45.141 
[2024-11-08 17:08:21.625914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:45.141 [2024-11-08 17:08:21.625950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:45.141 [2024-11-08 17:08:21.626185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.141 "name": "raid_bdev1", 00:20:45.141 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:45.141 "strip_size_kb": 0, 00:20:45.141 "state": "online", 00:20:45.141 "raid_level": "raid1", 00:20:45.141 "superblock": true, 00:20:45.141 "num_base_bdevs": 3, 00:20:45.141 "num_base_bdevs_discovered": 3, 00:20:45.141 "num_base_bdevs_operational": 3, 00:20:45.141 "base_bdevs_list": [ 00:20:45.141 { 00:20:45.141 "name": "pt1", 00:20:45.141 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:45.141 "is_configured": true, 00:20:45.141 "data_offset": 2048, 00:20:45.141 "data_size": 63488 00:20:45.141 }, 00:20:45.141 { 00:20:45.141 "name": "pt2", 00:20:45.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:45.141 "is_configured": true, 00:20:45.141 "data_offset": 2048, 00:20:45.141 "data_size": 63488 00:20:45.141 }, 00:20:45.141 { 00:20:45.141 "name": "pt3", 00:20:45.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:45.141 "is_configured": true, 00:20:45.141 "data_offset": 2048, 00:20:45.141 "data_size": 63488 00:20:45.141 } 00:20:45.141 ] 00:20:45.141 }' 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.141 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:45.400 17:08:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.400 [2024-11-08 17:08:21.947254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:45.400 "name": "raid_bdev1", 00:20:45.400 "aliases": [ 00:20:45.400 "37b00608-3318-44cb-b538-7cdc14afeb8b" 00:20:45.400 ], 00:20:45.400 "product_name": "Raid Volume", 00:20:45.400 "block_size": 512, 00:20:45.400 "num_blocks": 63488, 00:20:45.400 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:45.400 "assigned_rate_limits": { 00:20:45.400 "rw_ios_per_sec": 0, 00:20:45.400 "rw_mbytes_per_sec": 0, 00:20:45.400 "r_mbytes_per_sec": 0, 00:20:45.400 "w_mbytes_per_sec": 0 00:20:45.400 }, 00:20:45.400 "claimed": false, 00:20:45.400 "zoned": false, 00:20:45.400 "supported_io_types": { 00:20:45.400 "read": true, 00:20:45.400 "write": true, 00:20:45.400 "unmap": false, 00:20:45.400 "flush": false, 00:20:45.400 "reset": true, 00:20:45.400 "nvme_admin": false, 00:20:45.400 "nvme_io": false, 00:20:45.400 "nvme_io_md": false, 00:20:45.400 "write_zeroes": true, 00:20:45.400 "zcopy": false, 00:20:45.400 "get_zone_info": false, 00:20:45.400 "zone_management": false, 00:20:45.400 "zone_append": false, 00:20:45.400 "compare": false, 00:20:45.400 
"compare_and_write": false, 00:20:45.400 "abort": false, 00:20:45.400 "seek_hole": false, 00:20:45.400 "seek_data": false, 00:20:45.400 "copy": false, 00:20:45.400 "nvme_iov_md": false 00:20:45.400 }, 00:20:45.400 "memory_domains": [ 00:20:45.400 { 00:20:45.400 "dma_device_id": "system", 00:20:45.400 "dma_device_type": 1 00:20:45.400 }, 00:20:45.400 { 00:20:45.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.400 "dma_device_type": 2 00:20:45.400 }, 00:20:45.400 { 00:20:45.400 "dma_device_id": "system", 00:20:45.400 "dma_device_type": 1 00:20:45.400 }, 00:20:45.400 { 00:20:45.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.400 "dma_device_type": 2 00:20:45.400 }, 00:20:45.400 { 00:20:45.400 "dma_device_id": "system", 00:20:45.400 "dma_device_type": 1 00:20:45.400 }, 00:20:45.400 { 00:20:45.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.400 "dma_device_type": 2 00:20:45.400 } 00:20:45.400 ], 00:20:45.400 "driver_specific": { 00:20:45.400 "raid": { 00:20:45.400 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:45.400 "strip_size_kb": 0, 00:20:45.400 "state": "online", 00:20:45.400 "raid_level": "raid1", 00:20:45.400 "superblock": true, 00:20:45.400 "num_base_bdevs": 3, 00:20:45.400 "num_base_bdevs_discovered": 3, 00:20:45.400 "num_base_bdevs_operational": 3, 00:20:45.400 "base_bdevs_list": [ 00:20:45.400 { 00:20:45.400 "name": "pt1", 00:20:45.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:45.400 "is_configured": true, 00:20:45.400 "data_offset": 2048, 00:20:45.400 "data_size": 63488 00:20:45.400 }, 00:20:45.400 { 00:20:45.400 "name": "pt2", 00:20:45.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:45.400 "is_configured": true, 00:20:45.400 "data_offset": 2048, 00:20:45.400 "data_size": 63488 00:20:45.400 }, 00:20:45.400 { 00:20:45.400 "name": "pt3", 00:20:45.400 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:45.400 "is_configured": true, 00:20:45.400 "data_offset": 2048, 00:20:45.400 "data_size": 63488 00:20:45.400 } 
00:20:45.400 ] 00:20:45.400 } 00:20:45.400 } 00:20:45.400 }' 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:45.400 pt2 00:20:45.400 pt3' 00:20:45.400 17:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.400 17:08:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.400 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.675 [2024-11-08 17:08:22.131257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=37b00608-3318-44cb-b538-7cdc14afeb8b 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 37b00608-3318-44cb-b538-7cdc14afeb8b ']' 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.675 [2024-11-08 17:08:22.154950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.675 [2024-11-08 17:08:22.155076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:45.675 [2024-11-08 17:08:22.155210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:45.675 [2024-11-08 17:08:22.155316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:45.675 [2024-11-08 17:08:22.155388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:45.675 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:45.676 
17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.676 [2024-11-08 17:08:22.267043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:45.676 [2024-11-08 17:08:22.269156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:45.676 [2024-11-08 17:08:22.269211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc3 is claimed 00:20:45.676 [2024-11-08 17:08:22.269264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:45.676 [2024-11-08 17:08:22.269321] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:45.676 [2024-11-08 17:08:22.269342] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:45.676 [2024-11-08 17:08:22.269360] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.676 [2024-11-08 17:08:22.269370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:45.676 request: 00:20:45.676 { 00:20:45.676 "name": "raid_bdev1", 00:20:45.676 "raid_level": "raid1", 00:20:45.676 "base_bdevs": [ 00:20:45.676 "malloc1", 00:20:45.676 "malloc2", 00:20:45.676 "malloc3" 00:20:45.676 ], 00:20:45.676 "superblock": false, 00:20:45.676 "method": "bdev_raid_create", 00:20:45.676 "req_id": 1 00:20:45.676 } 00:20:45.676 Got JSON-RPC error response 00:20:45.676 response: 00:20:45.676 { 00:20:45.676 "code": -17, 00:20:45.676 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:45.676 } 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.676 
17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.676 [2024-11-08 17:08:22.315004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:45.676 [2024-11-08 17:08:22.315178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.676 [2024-11-08 17:08:22.315247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:45.676 [2024-11-08 17:08:22.315354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.676 [2024-11-08 17:08:22.317889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.676 [2024-11-08 17:08:22.317999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:45.676 [2024-11-08 17:08:22.318142] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:45.676 [2024-11-08 17:08:22.318240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:45.676 pt1 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.676 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.677 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.677 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.677 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.677 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.677 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.677 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.677 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.677 "name": "raid_bdev1", 00:20:45.677 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:45.677 "strip_size_kb": 0, 00:20:45.677 "state": "configuring", 00:20:45.677 
"raid_level": "raid1", 00:20:45.677 "superblock": true, 00:20:45.677 "num_base_bdevs": 3, 00:20:45.677 "num_base_bdevs_discovered": 1, 00:20:45.677 "num_base_bdevs_operational": 3, 00:20:45.677 "base_bdevs_list": [ 00:20:45.677 { 00:20:45.677 "name": "pt1", 00:20:45.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:45.677 "is_configured": true, 00:20:45.677 "data_offset": 2048, 00:20:45.677 "data_size": 63488 00:20:45.677 }, 00:20:45.677 { 00:20:45.677 "name": null, 00:20:45.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:45.677 "is_configured": false, 00:20:45.677 "data_offset": 2048, 00:20:45.677 "data_size": 63488 00:20:45.677 }, 00:20:45.677 { 00:20:45.677 "name": null, 00:20:45.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:45.677 "is_configured": false, 00:20:45.677 "data_offset": 2048, 00:20:45.677 "data_size": 63488 00:20:45.677 } 00:20:45.677 ] 00:20:45.677 }' 00:20:45.677 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.677 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 [2024-11-08 17:08:22.667087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:46.243 [2024-11-08 17:08:22.667251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.243 [2024-11-08 17:08:22.667296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:46.243 [2024-11-08 17:08:22.667308] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.243 [2024-11-08 17:08:22.667793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.243 [2024-11-08 17:08:22.667809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:46.243 [2024-11-08 17:08:22.667898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:46.243 [2024-11-08 17:08:22.667919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:46.243 pt2 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 [2024-11-08 17:08:22.675091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.243 "name": "raid_bdev1", 00:20:46.243 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:46.243 "strip_size_kb": 0, 00:20:46.243 "state": "configuring", 00:20:46.243 "raid_level": "raid1", 00:20:46.243 "superblock": true, 00:20:46.243 "num_base_bdevs": 3, 00:20:46.243 "num_base_bdevs_discovered": 1, 00:20:46.243 "num_base_bdevs_operational": 3, 00:20:46.243 "base_bdevs_list": [ 00:20:46.243 { 00:20:46.243 "name": "pt1", 00:20:46.243 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:46.243 "is_configured": true, 00:20:46.243 "data_offset": 2048, 00:20:46.243 "data_size": 63488 00:20:46.243 }, 00:20:46.243 { 00:20:46.243 "name": null, 00:20:46.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:46.243 "is_configured": false, 00:20:46.243 "data_offset": 0, 00:20:46.243 "data_size": 63488 00:20:46.243 }, 00:20:46.243 { 00:20:46.243 "name": null, 00:20:46.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:46.243 "is_configured": false, 00:20:46.243 "data_offset": 2048, 00:20:46.243 
"data_size": 63488 00:20:46.243 } 00:20:46.243 ] 00:20:46.243 }' 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.243 17:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.501 [2024-11-08 17:08:23.027146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:46.501 [2024-11-08 17:08:23.027315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.501 [2024-11-08 17:08:23.027355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:46.501 [2024-11-08 17:08:23.027410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.501 [2024-11-08 17:08:23.027925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.501 [2024-11-08 17:08:23.027956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:46.501 [2024-11-08 17:08:23.028035] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:46.501 [2024-11-08 17:08:23.028071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:46.501 pt2 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.501 [2024-11-08 17:08:23.035127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:46.501 [2024-11-08 17:08:23.035254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.501 [2024-11-08 17:08:23.035294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:46.501 [2024-11-08 17:08:23.035352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.501 [2024-11-08 17:08:23.035795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.501 [2024-11-08 17:08:23.035927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:46.501 [2024-11-08 17:08:23.036040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:46.501 [2024-11-08 17:08:23.036080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:46.501 [2024-11-08 17:08:23.036227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:46.501 [2024-11-08 17:08:23.036256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:46.501 [2024-11-08 17:08:23.036507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:46.501 [2024-11-08 17:08:23.036717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:20:46.501 [2024-11-08 17:08:23.036804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:46.501 [2024-11-08 17:08:23.037021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.501 pt3 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.501 "name": "raid_bdev1", 00:20:46.501 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:46.501 "strip_size_kb": 0, 00:20:46.501 "state": "online", 00:20:46.501 "raid_level": "raid1", 00:20:46.501 "superblock": true, 00:20:46.501 "num_base_bdevs": 3, 00:20:46.501 "num_base_bdevs_discovered": 3, 00:20:46.501 "num_base_bdevs_operational": 3, 00:20:46.501 "base_bdevs_list": [ 00:20:46.501 { 00:20:46.501 "name": "pt1", 00:20:46.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:46.501 "is_configured": true, 00:20:46.501 "data_offset": 2048, 00:20:46.501 "data_size": 63488 00:20:46.501 }, 00:20:46.501 { 00:20:46.501 "name": "pt2", 00:20:46.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:46.501 "is_configured": true, 00:20:46.501 "data_offset": 2048, 00:20:46.501 "data_size": 63488 00:20:46.501 }, 00:20:46.501 { 00:20:46.501 "name": "pt3", 00:20:46.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:46.501 "is_configured": true, 00:20:46.501 "data_offset": 2048, 00:20:46.501 "data_size": 63488 00:20:46.501 } 00:20:46.501 ] 00:20:46.501 }' 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.501 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:46.759 17:08:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.759 [2024-11-08 17:08:23.383617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:46.759 "name": "raid_bdev1", 00:20:46.759 "aliases": [ 00:20:46.759 "37b00608-3318-44cb-b538-7cdc14afeb8b" 00:20:46.759 ], 00:20:46.759 "product_name": "Raid Volume", 00:20:46.759 "block_size": 512, 00:20:46.759 "num_blocks": 63488, 00:20:46.759 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:46.759 "assigned_rate_limits": { 00:20:46.759 "rw_ios_per_sec": 0, 00:20:46.759 "rw_mbytes_per_sec": 0, 00:20:46.759 "r_mbytes_per_sec": 0, 00:20:46.759 "w_mbytes_per_sec": 0 00:20:46.759 }, 00:20:46.759 "claimed": false, 00:20:46.759 "zoned": false, 00:20:46.759 "supported_io_types": { 00:20:46.759 "read": true, 00:20:46.759 "write": true, 00:20:46.759 "unmap": false, 00:20:46.759 "flush": false, 00:20:46.759 "reset": true, 00:20:46.759 "nvme_admin": false, 00:20:46.759 "nvme_io": false, 00:20:46.759 "nvme_io_md": false, 00:20:46.759 "write_zeroes": true, 00:20:46.759 "zcopy": false, 00:20:46.759 "get_zone_info": false, 00:20:46.759 
"zone_management": false, 00:20:46.759 "zone_append": false, 00:20:46.759 "compare": false, 00:20:46.759 "compare_and_write": false, 00:20:46.759 "abort": false, 00:20:46.759 "seek_hole": false, 00:20:46.759 "seek_data": false, 00:20:46.759 "copy": false, 00:20:46.759 "nvme_iov_md": false 00:20:46.759 }, 00:20:46.759 "memory_domains": [ 00:20:46.759 { 00:20:46.759 "dma_device_id": "system", 00:20:46.759 "dma_device_type": 1 00:20:46.759 }, 00:20:46.759 { 00:20:46.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.759 "dma_device_type": 2 00:20:46.759 }, 00:20:46.759 { 00:20:46.759 "dma_device_id": "system", 00:20:46.759 "dma_device_type": 1 00:20:46.759 }, 00:20:46.759 { 00:20:46.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.759 "dma_device_type": 2 00:20:46.759 }, 00:20:46.759 { 00:20:46.759 "dma_device_id": "system", 00:20:46.759 "dma_device_type": 1 00:20:46.759 }, 00:20:46.759 { 00:20:46.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.759 "dma_device_type": 2 00:20:46.759 } 00:20:46.759 ], 00:20:46.759 "driver_specific": { 00:20:46.759 "raid": { 00:20:46.759 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:46.759 "strip_size_kb": 0, 00:20:46.759 "state": "online", 00:20:46.759 "raid_level": "raid1", 00:20:46.759 "superblock": true, 00:20:46.759 "num_base_bdevs": 3, 00:20:46.759 "num_base_bdevs_discovered": 3, 00:20:46.759 "num_base_bdevs_operational": 3, 00:20:46.759 "base_bdevs_list": [ 00:20:46.759 { 00:20:46.759 "name": "pt1", 00:20:46.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:46.759 "is_configured": true, 00:20:46.759 "data_offset": 2048, 00:20:46.759 "data_size": 63488 00:20:46.759 }, 00:20:46.759 { 00:20:46.759 "name": "pt2", 00:20:46.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:46.759 "is_configured": true, 00:20:46.759 "data_offset": 2048, 00:20:46.759 "data_size": 63488 00:20:46.759 }, 00:20:46.759 { 00:20:46.759 "name": "pt3", 00:20:46.759 "uuid": "00000000-0000-0000-0000-000000000003", 
00:20:46.759 "is_configured": true, 00:20:46.759 "data_offset": 2048, 00:20:46.759 "data_size": 63488 00:20:46.759 } 00:20:46.759 ] 00:20:46.759 } 00:20:46.759 } 00:20:46.759 }' 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:46.759 pt2 00:20:46.759 pt3' 00:20:46.759 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b pt2 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.019 [2024-11-08 17:08:23.587605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 37b00608-3318-44cb-b538-7cdc14afeb8b '!=' 37b00608-3318-44cb-b538-7cdc14afeb8b ']' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.019 [2024-11-08 17:08:23.619359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.019 "name": "raid_bdev1", 00:20:47.019 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:47.019 "strip_size_kb": 0, 00:20:47.019 "state": "online", 00:20:47.019 "raid_level": "raid1", 00:20:47.019 "superblock": true, 00:20:47.019 "num_base_bdevs": 3, 00:20:47.019 "num_base_bdevs_discovered": 2, 00:20:47.019 "num_base_bdevs_operational": 2, 00:20:47.019 "base_bdevs_list": [ 00:20:47.019 { 00:20:47.019 "name": null, 00:20:47.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.019 "is_configured": false, 00:20:47.019 "data_offset": 0, 00:20:47.019 "data_size": 63488 00:20:47.019 }, 00:20:47.019 { 00:20:47.019 "name": "pt2", 00:20:47.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:47.019 "is_configured": true, 00:20:47.019 "data_offset": 2048, 00:20:47.019 "data_size": 63488 00:20:47.019 }, 00:20:47.019 { 00:20:47.019 "name": "pt3", 00:20:47.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:47.019 "is_configured": true, 00:20:47.019 "data_offset": 2048, 00:20:47.019 "data_size": 63488 00:20:47.019 } 00:20:47.019 ] 00:20:47.019 }' 00:20:47.019 17:08:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.019 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.278 [2024-11-08 17:08:23.939379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:47.278 [2024-11-08 17:08:23.939512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:47.278 [2024-11-08 17:08:23.939643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:47.278 [2024-11-08 17:08:23.939712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:47.278 [2024-11-08 17:08:23.939727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:47.278 
17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.278 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.536 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.536 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:47.536 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:47.536 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:47.536 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:47.536 17:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:47.536 17:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.536 17:08:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.536 [2024-11-08 17:08:23.999367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:47.536 [2024-11-08 17:08:23.999515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.536 [2024-11-08 17:08:23.999553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:47.536 [2024-11-08 17:08:23.999601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.536 [2024-11-08 17:08:24.002027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.536 [2024-11-08 17:08:24.002145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:47.536 [2024-11-08 17:08:24.002282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:47.536 [2024-11-08 17:08:24.002335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:47.536 pt2 00:20:47.536 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.536 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:47.536 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.536 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:47.536 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.536 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.536 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:47.536 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.537 "name": "raid_bdev1", 00:20:47.537 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:47.537 "strip_size_kb": 0, 00:20:47.537 "state": "configuring", 00:20:47.537 "raid_level": "raid1", 00:20:47.537 "superblock": true, 00:20:47.537 "num_base_bdevs": 3, 00:20:47.537 "num_base_bdevs_discovered": 1, 00:20:47.537 "num_base_bdevs_operational": 2, 00:20:47.537 "base_bdevs_list": [ 00:20:47.537 { 00:20:47.537 "name": null, 00:20:47.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.537 "is_configured": false, 00:20:47.537 "data_offset": 2048, 00:20:47.537 "data_size": 63488 00:20:47.537 }, 00:20:47.537 { 00:20:47.537 "name": "pt2", 00:20:47.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:47.537 "is_configured": true, 00:20:47.537 "data_offset": 2048, 00:20:47.537 "data_size": 63488 00:20:47.537 }, 00:20:47.537 { 00:20:47.537 "name": null, 00:20:47.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:47.537 "is_configured": false, 00:20:47.537 "data_offset": 2048, 00:20:47.537 "data_size": 63488 00:20:47.537 } 00:20:47.537 ] 00:20:47.537 }' 
00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.537 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.795 [2024-11-08 17:08:24.335470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:47.795 [2024-11-08 17:08:24.335655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.795 [2024-11-08 17:08:24.335735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:47.795 [2024-11-08 17:08:24.335862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.795 [2024-11-08 17:08:24.336363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.795 [2024-11-08 17:08:24.336389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:47.795 [2024-11-08 17:08:24.336481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:47.795 [2024-11-08 17:08:24.336508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:47.795 [2024-11-08 17:08:24.336618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:47.795 [2024-11-08 17:08:24.336630] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:47.795 [2024-11-08 17:08:24.336899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:47.795 [2024-11-08 17:08:24.337042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:47.795 [2024-11-08 17:08:24.337050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:47.795 [2024-11-08 17:08:24.337183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.795 pt3 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.795 "name": "raid_bdev1", 00:20:47.795 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:47.795 "strip_size_kb": 0, 00:20:47.795 "state": "online", 00:20:47.795 "raid_level": "raid1", 00:20:47.795 "superblock": true, 00:20:47.795 "num_base_bdevs": 3, 00:20:47.795 "num_base_bdevs_discovered": 2, 00:20:47.795 "num_base_bdevs_operational": 2, 00:20:47.795 "base_bdevs_list": [ 00:20:47.795 { 00:20:47.795 "name": null, 00:20:47.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.795 "is_configured": false, 00:20:47.795 "data_offset": 2048, 00:20:47.795 "data_size": 63488 00:20:47.795 }, 00:20:47.795 { 00:20:47.795 "name": "pt2", 00:20:47.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:47.795 "is_configured": true, 00:20:47.795 "data_offset": 2048, 00:20:47.795 "data_size": 63488 00:20:47.795 }, 00:20:47.795 { 00:20:47.795 "name": "pt3", 00:20:47.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:47.795 "is_configured": true, 00:20:47.795 "data_offset": 2048, 00:20:47.795 "data_size": 63488 00:20:47.795 } 00:20:47.795 ] 00:20:47.795 }' 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.795 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.054 
17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.054 [2024-11-08 17:08:24.655515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:48.054 [2024-11-08 17:08:24.655547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.054 [2024-11-08 17:08:24.655625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.054 [2024-11-08 17:08:24.655696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.054 [2024-11-08 17:08:24.655706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.054 17:08:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.054 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.054 [2024-11-08 17:08:24.707535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:48.054 [2024-11-08 17:08:24.707597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.054 [2024-11-08 17:08:24.707617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:48.054 [2024-11-08 17:08:24.707627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.054 [2024-11-08 17:08:24.710020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.054 [2024-11-08 17:08:24.710055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:48.054 [2024-11-08 17:08:24.710139] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:48.054 [2024-11-08 17:08:24.710183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:48.054 [2024-11-08 17:08:24.710307] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:48.055 [2024-11-08 17:08:24.710318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:48.055 [2024-11-08 17:08:24.710336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:48.055 [2024-11-08 
17:08:24.710384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:48.055 pt1 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.055 17:08:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.055 "name": "raid_bdev1", 00:20:48.055 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:48.055 "strip_size_kb": 0, 00:20:48.055 "state": "configuring", 00:20:48.055 "raid_level": "raid1", 00:20:48.055 "superblock": true, 00:20:48.055 "num_base_bdevs": 3, 00:20:48.055 "num_base_bdevs_discovered": 1, 00:20:48.055 "num_base_bdevs_operational": 2, 00:20:48.055 "base_bdevs_list": [ 00:20:48.055 { 00:20:48.055 "name": null, 00:20:48.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.055 "is_configured": false, 00:20:48.055 "data_offset": 2048, 00:20:48.055 "data_size": 63488 00:20:48.055 }, 00:20:48.055 { 00:20:48.055 "name": "pt2", 00:20:48.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:48.055 "is_configured": true, 00:20:48.055 "data_offset": 2048, 00:20:48.055 "data_size": 63488 00:20:48.055 }, 00:20:48.055 { 00:20:48.055 "name": null, 00:20:48.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:48.055 "is_configured": false, 00:20:48.055 "data_offset": 2048, 00:20:48.055 "data_size": 63488 00:20:48.055 } 00:20:48.055 ] 00:20:48.055 }' 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.055 17:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.620 [2024-11-08 17:08:25.067623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:48.620 [2024-11-08 17:08:25.067685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.620 [2024-11-08 17:08:25.067706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:48.620 [2024-11-08 17:08:25.067716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.620 [2024-11-08 17:08:25.068188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.620 [2024-11-08 17:08:25.068203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:48.620 [2024-11-08 17:08:25.068285] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:48.620 [2024-11-08 17:08:25.068324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:48.620 [2024-11-08 17:08:25.068442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:48.620 [2024-11-08 17:08:25.068452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:48.620 [2024-11-08 17:08:25.068705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:48.620 [2024-11-08 17:08:25.068866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:48.620 [2024-11-08 17:08:25.068878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:20:48.620 [2024-11-08 17:08:25.069009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.620 pt3 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.620 "name": "raid_bdev1", 00:20:48.620 "uuid": "37b00608-3318-44cb-b538-7cdc14afeb8b", 00:20:48.620 "strip_size_kb": 0, 00:20:48.620 "state": "online", 00:20:48.620 "raid_level": "raid1", 00:20:48.620 "superblock": true, 00:20:48.620 "num_base_bdevs": 3, 00:20:48.620 "num_base_bdevs_discovered": 2, 00:20:48.620 "num_base_bdevs_operational": 2, 00:20:48.620 "base_bdevs_list": [ 00:20:48.620 { 00:20:48.620 "name": null, 00:20:48.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.620 "is_configured": false, 00:20:48.620 "data_offset": 2048, 00:20:48.620 "data_size": 63488 00:20:48.620 }, 00:20:48.620 { 00:20:48.620 "name": "pt2", 00:20:48.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:48.620 "is_configured": true, 00:20:48.620 "data_offset": 2048, 00:20:48.620 "data_size": 63488 00:20:48.620 }, 00:20:48.620 { 00:20:48.620 "name": "pt3", 00:20:48.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:48.620 "is_configured": true, 00:20:48.620 "data_offset": 2048, 00:20:48.620 "data_size": 63488 00:20:48.620 } 00:20:48.620 ] 00:20:48.620 }' 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.620 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:48.878 
17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:48.878 [2024-11-08 17:08:25.440018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 37b00608-3318-44cb-b538-7cdc14afeb8b '!=' 37b00608-3318-44cb-b538-7cdc14afeb8b ']' 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67336 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 67336 ']' 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 67336 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67336 00:20:48.878 killing process with pid 67336 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67336' 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 67336 00:20:48.878 [2024-11-08 
17:08:25.507530] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:48.878 17:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 67336 00:20:48.878 [2024-11-08 17:08:25.507634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.878 [2024-11-08 17:08:25.507710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.878 [2024-11-08 17:08:25.507724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:49.136 [2024-11-08 17:08:25.706647] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:50.067 17:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:50.067 00:20:50.067 real 0m5.921s 00:20:50.067 user 0m9.197s 00:20:50.067 sys 0m0.984s 00:20:50.067 17:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:50.067 17:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.067 ************************************ 00:20:50.067 END TEST raid_superblock_test 00:20:50.067 ************************************ 00:20:50.067 17:08:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:20:50.067 17:08:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:50.067 17:08:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:50.067 17:08:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:50.067 ************************************ 00:20:50.068 START TEST raid_read_error_test 00:20:50.068 ************************************ 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:20:50.068 
17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:50.068 17:08:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xMYTqZycv9 00:20:50.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67760 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67760 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67760 ']' 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:50.068 17:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.068 [2024-11-08 17:08:26.594502] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:20:50.068 [2024-11-08 17:08:26.594645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67760 ] 00:20:50.068 [2024-11-08 17:08:26.755744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.326 [2024-11-08 17:08:26.938538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.584 [2024-11-08 17:08:27.090169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.584 [2024-11-08 17:08:27.090443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.841 BaseBdev1_malloc 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.841 true 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.841 [2024-11-08 17:08:27.493288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:50.841 [2024-11-08 17:08:27.493465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.841 [2024-11-08 17:08:27.493496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:50.841 [2024-11-08 17:08:27.493508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.841 [2024-11-08 17:08:27.495830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.841 [2024-11-08 17:08:27.495867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:50.841 BaseBdev1 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.841 BaseBdev2_malloc 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.841 true 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.841 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.841 [2024-11-08 17:08:27.539445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:50.841 [2024-11-08 17:08:27.539603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.842 [2024-11-08 17:08:27.539627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:50.842 [2024-11-08 17:08:27.539638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.842 [2024-11-08 17:08:27.541927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.842 [2024-11-08 17:08:27.541963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:50.842 BaseBdev2 00:20:50.842 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.842 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:50.842 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:50.842 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.842 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.100 BaseBdev3_malloc 00:20:51.100 17:08:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.100 true 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.100 [2024-11-08 17:08:27.599188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:51.100 [2024-11-08 17:08:27.599244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.100 [2024-11-08 17:08:27.599262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:51.100 [2024-11-08 17:08:27.599273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.100 [2024-11-08 17:08:27.601543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.100 [2024-11-08 17:08:27.601583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:51.100 BaseBdev3 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.100 [2024-11-08 17:08:27.607267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:51.100 [2024-11-08 17:08:27.609335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:51.100 [2024-11-08 17:08:27.609497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:51.100 [2024-11-08 17:08:27.609800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:51.100 [2024-11-08 17:08:27.609871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:51.100 [2024-11-08 17:08:27.610150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:51.100 [2024-11-08 17:08:27.610316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:51.100 [2024-11-08 17:08:27.610328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:51.100 [2024-11-08 17:08:27.610480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.100 17:08:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.100 "name": "raid_bdev1", 00:20:51.100 "uuid": "93b4fec7-241d-44d9-bbc3-837f31ed9859", 00:20:51.100 "strip_size_kb": 0, 00:20:51.100 "state": "online", 00:20:51.100 "raid_level": "raid1", 00:20:51.100 "superblock": true, 00:20:51.100 "num_base_bdevs": 3, 00:20:51.100 "num_base_bdevs_discovered": 3, 00:20:51.100 "num_base_bdevs_operational": 3, 00:20:51.100 "base_bdevs_list": [ 00:20:51.100 { 00:20:51.100 "name": "BaseBdev1", 00:20:51.100 "uuid": "816fb31b-7aa5-504a-ad8f-18c514c70571", 00:20:51.100 "is_configured": true, 00:20:51.100 "data_offset": 2048, 00:20:51.100 "data_size": 63488 00:20:51.100 }, 00:20:51.100 { 00:20:51.100 "name": "BaseBdev2", 00:20:51.100 "uuid": "068cea25-a07f-5f32-825a-c9dd6d158216", 00:20:51.100 "is_configured": true, 00:20:51.100 "data_offset": 2048, 00:20:51.100 "data_size": 63488 
00:20:51.100 }, 00:20:51.100 { 00:20:51.100 "name": "BaseBdev3", 00:20:51.100 "uuid": "48cae33d-1059-507f-9c3c-b257cfb9cea1", 00:20:51.100 "is_configured": true, 00:20:51.100 "data_offset": 2048, 00:20:51.100 "data_size": 63488 00:20:51.100 } 00:20:51.100 ] 00:20:51.100 }' 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.100 17:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.359 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:51.359 17:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:51.359 [2024-11-08 17:08:28.044373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.292 
17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.292 "name": "raid_bdev1", 00:20:52.292 "uuid": "93b4fec7-241d-44d9-bbc3-837f31ed9859", 00:20:52.292 "strip_size_kb": 0, 00:20:52.292 "state": "online", 00:20:52.292 "raid_level": "raid1", 00:20:52.292 "superblock": true, 00:20:52.292 "num_base_bdevs": 3, 00:20:52.292 "num_base_bdevs_discovered": 3, 00:20:52.292 "num_base_bdevs_operational": 3, 00:20:52.292 "base_bdevs_list": [ 00:20:52.292 { 00:20:52.292 "name": "BaseBdev1", 00:20:52.292 "uuid": "816fb31b-7aa5-504a-ad8f-18c514c70571", 
00:20:52.292 "is_configured": true, 00:20:52.292 "data_offset": 2048, 00:20:52.292 "data_size": 63488 00:20:52.292 }, 00:20:52.292 { 00:20:52.292 "name": "BaseBdev2", 00:20:52.292 "uuid": "068cea25-a07f-5f32-825a-c9dd6d158216", 00:20:52.292 "is_configured": true, 00:20:52.292 "data_offset": 2048, 00:20:52.292 "data_size": 63488 00:20:52.292 }, 00:20:52.292 { 00:20:52.292 "name": "BaseBdev3", 00:20:52.292 "uuid": "48cae33d-1059-507f-9c3c-b257cfb9cea1", 00:20:52.292 "is_configured": true, 00:20:52.292 "data_offset": 2048, 00:20:52.292 "data_size": 63488 00:20:52.292 } 00:20:52.292 ] 00:20:52.292 }' 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.292 17:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.871 [2024-11-08 17:08:29.278836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:52.871 [2024-11-08 17:08:29.278870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:52.871 [2024-11-08 17:08:29.282190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:52.871 [2024-11-08 17:08:29.282372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.871 [2024-11-08 17:08:29.282532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:52.871 [2024-11-08 17:08:29.282545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:52.871 { 00:20:52.871 "results": [ 00:20:52.871 { 00:20:52.871 "job": "raid_bdev1", 
00:20:52.871 "core_mask": "0x1", 00:20:52.871 "workload": "randrw", 00:20:52.871 "percentage": 50, 00:20:52.871 "status": "finished", 00:20:52.871 "queue_depth": 1, 00:20:52.871 "io_size": 131072, 00:20:52.871 "runtime": 1.232382, 00:20:52.871 "iops": 12148.830476264664, 00:20:52.871 "mibps": 1518.603809533083, 00:20:52.871 "io_failed": 0, 00:20:52.871 "io_timeout": 0, 00:20:52.871 "avg_latency_us": 78.99854908649992, 00:20:52.871 "min_latency_us": 29.735384615384614, 00:20:52.871 "max_latency_us": 1739.2246153846154 00:20:52.871 } 00:20:52.871 ], 00:20:52.871 "core_count": 1 00:20:52.871 } 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67760 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67760 ']' 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67760 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67760 00:20:52.871 killing process with pid 67760 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67760' 00:20:52.871 17:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67760 00:20:52.871 [2024-11-08 17:08:29.313474] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:52.871 17:08:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67760 00:20:52.871 [2024-11-08 17:08:29.466506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:53.803 17:08:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:53.803 17:08:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xMYTqZycv9 00:20:53.803 17:08:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:53.803 17:08:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:20:53.803 17:08:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:20:53.803 ************************************ 00:20:53.803 END TEST raid_read_error_test 00:20:53.803 ************************************ 00:20:53.803 17:08:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:53.803 17:08:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:53.803 17:08:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:53.803 00:20:53.803 real 0m3.750s 00:20:53.803 user 0m4.412s 00:20:53.803 sys 0m0.441s 00:20:53.803 17:08:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:53.803 17:08:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.803 17:08:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:20:53.803 17:08:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:53.803 17:08:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:53.803 17:08:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:53.803 ************************************ 00:20:53.803 START TEST raid_write_error_test 00:20:53.803 ************************************ 00:20:53.803 17:08:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:53.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.H132lR8o5q 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67900 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67900 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67900 ']' 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.803 17:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:53.803 [2024-11-08 17:08:30.417194] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:20:53.803 [2024-11-08 17:08:30.417333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67900 ] 00:20:54.061 [2024-11-08 17:08:30.580157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.061 [2024-11-08 17:08:30.695895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.319 [2024-11-08 17:08:30.845289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.319 [2024-11-08 17:08:30.845362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.577 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:54.577 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:20:54.577 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:54.577 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:54.577 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.577 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.577 BaseBdev1_malloc 00:20:54.835 17:08:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 true 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 [2024-11-08 17:08:31.304854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:54.835 [2024-11-08 17:08:31.304912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.835 [2024-11-08 17:08:31.304933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:54.835 [2024-11-08 17:08:31.304944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.835 [2024-11-08 17:08:31.307251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.835 [2024-11-08 17:08:31.307394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:54.835 BaseBdev1 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 BaseBdev2_malloc 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 true 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 [2024-11-08 17:08:31.354677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:54.835 [2024-11-08 17:08:31.354850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.835 [2024-11-08 17:08:31.354875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:54.835 [2024-11-08 17:08:31.354887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.835 [2024-11-08 17:08:31.357119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.835 [2024-11-08 17:08:31.357156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:54.835 BaseBdev2 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 BaseBdev3_malloc 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 true 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 [2024-11-08 17:08:31.419064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:54.835 [2024-11-08 17:08:31.419222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.835 [2024-11-08 17:08:31.419264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:54.835 [2024-11-08 17:08:31.419641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.835 [2024-11-08 17:08:31.422000] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.835 [2024-11-08 17:08:31.422036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:54.835 BaseBdev3 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 [2024-11-08 17:08:31.427142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.835 [2024-11-08 17:08:31.429157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.835 [2024-11-08 17:08:31.429314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:54.835 [2024-11-08 17:08:31.429554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:54.835 [2024-11-08 17:08:31.429587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:54.835 [2024-11-08 17:08:31.429955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:54.835 [2024-11-08 17:08:31.430125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:54.835 [2024-11-08 17:08:31.430137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:54.835 [2024-11-08 17:08:31.430283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.835 "name": "raid_bdev1", 00:20:54.835 "uuid": "5a195876-acea-4f2f-bb0b-54e4ef1a15e4", 00:20:54.835 "strip_size_kb": 0, 00:20:54.835 "state": "online", 00:20:54.835 "raid_level": "raid1", 00:20:54.835 "superblock": true, 00:20:54.835 
"num_base_bdevs": 3, 00:20:54.835 "num_base_bdevs_discovered": 3, 00:20:54.835 "num_base_bdevs_operational": 3, 00:20:54.835 "base_bdevs_list": [ 00:20:54.835 { 00:20:54.835 "name": "BaseBdev1", 00:20:54.836 "uuid": "4ccb7ecb-7ee4-56d6-923c-53c397c06f3d", 00:20:54.836 "is_configured": true, 00:20:54.836 "data_offset": 2048, 00:20:54.836 "data_size": 63488 00:20:54.836 }, 00:20:54.836 { 00:20:54.836 "name": "BaseBdev2", 00:20:54.836 "uuid": "9ebf6746-51bb-5719-b8d5-a0dfd275f397", 00:20:54.836 "is_configured": true, 00:20:54.836 "data_offset": 2048, 00:20:54.836 "data_size": 63488 00:20:54.836 }, 00:20:54.836 { 00:20:54.836 "name": "BaseBdev3", 00:20:54.836 "uuid": "d7d99fd2-15ca-552f-a053-0a1e0f0a1243", 00:20:54.836 "is_configured": true, 00:20:54.836 "data_offset": 2048, 00:20:54.836 "data_size": 63488 00:20:54.836 } 00:20:54.836 ] 00:20:54.836 }' 00:20:54.836 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.836 17:08:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.094 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:55.094 17:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:55.351 [2024-11-08 17:08:31.848279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.285 [2024-11-08 17:08:32.769834] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:20:56.285 [2024-11-08 17:08:32.770051] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:56.285 [2024-11-08 17:08:32.770288] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.285 17:08:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.285 "name": "raid_bdev1", 00:20:56.285 "uuid": "5a195876-acea-4f2f-bb0b-54e4ef1a15e4", 00:20:56.285 "strip_size_kb": 0, 00:20:56.285 "state": "online", 00:20:56.285 "raid_level": "raid1", 00:20:56.285 "superblock": true, 00:20:56.285 "num_base_bdevs": 3, 00:20:56.285 "num_base_bdevs_discovered": 2, 00:20:56.285 "num_base_bdevs_operational": 2, 00:20:56.285 "base_bdevs_list": [ 00:20:56.285 { 00:20:56.285 "name": null, 00:20:56.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.285 "is_configured": false, 00:20:56.285 "data_offset": 0, 00:20:56.285 "data_size": 63488 00:20:56.285 }, 00:20:56.285 { 00:20:56.285 "name": "BaseBdev2", 00:20:56.285 "uuid": "9ebf6746-51bb-5719-b8d5-a0dfd275f397", 00:20:56.285 "is_configured": true, 00:20:56.285 "data_offset": 2048, 00:20:56.285 "data_size": 63488 00:20:56.285 }, 00:20:56.285 { 00:20:56.285 "name": "BaseBdev3", 00:20:56.285 "uuid": "d7d99fd2-15ca-552f-a053-0a1e0f0a1243", 00:20:56.285 "is_configured": true, 00:20:56.285 "data_offset": 2048, 00:20:56.285 "data_size": 63488 00:20:56.285 } 00:20:56.285 ] 00:20:56.285 }' 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.285 17:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.544 [2024-11-08 17:08:33.099796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:56.544 [2024-11-08 17:08:33.099831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.544 [2024-11-08 17:08:33.102911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.544 [2024-11-08 17:08:33.102969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.544 [2024-11-08 17:08:33.103059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.544 [2024-11-08 17:08:33.103074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:56.544 { 00:20:56.544 "results": [ 00:20:56.544 { 00:20:56.544 "job": "raid_bdev1", 00:20:56.544 "core_mask": "0x1", 00:20:56.544 "workload": "randrw", 00:20:56.544 "percentage": 50, 00:20:56.544 "status": "finished", 00:20:56.544 "queue_depth": 1, 00:20:56.544 "io_size": 131072, 00:20:56.544 "runtime": 1.249526, 00:20:56.544 "iops": 13730.00641843387, 00:20:56.544 "mibps": 1716.2508023042337, 00:20:56.544 "io_failed": 0, 00:20:56.544 "io_timeout": 0, 00:20:56.544 "avg_latency_us": 69.70546621948813, 00:20:56.544 "min_latency_us": 29.341538461538462, 00:20:56.544 "max_latency_us": 1751.8276923076924 00:20:56.544 } 00:20:56.544 ], 00:20:56.544 "core_count": 1 00:20:56.544 } 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67900 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67900 ']' 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # kill -0 67900 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67900 00:20:56.544 killing process with pid 67900 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67900' 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67900 00:20:56.544 [2024-11-08 17:08:33.132783] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.544 17:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67900 00:20:56.802 [2024-11-08 17:08:33.284561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:57.736 17:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:57.736 17:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.H132lR8o5q 00:20:57.736 17:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:57.736 ************************************ 00:20:57.736 END TEST raid_write_error_test 00:20:57.736 ************************************ 00:20:57.736 17:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:20:57.736 17:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:20:57.736 17:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:57.736 17:08:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:57.736 17:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:57.736 00:20:57.736 real 0m3.794s 00:20:57.736 user 0m4.434s 00:20:57.736 sys 0m0.456s 00:20:57.736 17:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:57.736 17:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.736 17:08:34 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:20:57.736 17:08:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:20:57.736 17:08:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:20:57.736 17:08:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:57.736 17:08:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:57.736 17:08:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:57.736 ************************************ 00:20:57.736 START TEST raid_state_function_test 00:20:57.736 ************************************ 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.736 17:08:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:57.736 Process raid pid: 68033 00:20:57.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=68033 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68033' 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 68033 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 68033 ']' 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:57.736 
17:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:57.736 17:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.736 [2024-11-08 17:08:34.291999] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:20:57.736 [2024-11-08 17:08:34.292212] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.995 [2024-11-08 17:08:34.480014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.995 [2024-11-08 17:08:34.597889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.253 [2024-11-08 17:08:34.745783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.253 [2024-11-08 17:08:34.745836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.511 
[2024-11-08 17:08:35.169670] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:58.511 [2024-11-08 17:08:35.169863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:58.511 [2024-11-08 17:08:35.169955] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:58.511 [2024-11-08 17:08:35.169994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:58.511 [2024-11-08 17:08:35.170057] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:58.511 [2024-11-08 17:08:35.170134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:58.511 [2024-11-08 17:08:35.170195] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:58.511 [2024-11-08 17:08:35.170221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.511 17:08:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.511 "name": "Existed_Raid", 00:20:58.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.511 "strip_size_kb": 64, 00:20:58.511 "state": "configuring", 00:20:58.511 "raid_level": "raid0", 00:20:58.511 "superblock": false, 00:20:58.511 "num_base_bdevs": 4, 00:20:58.511 "num_base_bdevs_discovered": 0, 00:20:58.511 "num_base_bdevs_operational": 4, 00:20:58.511 "base_bdevs_list": [ 00:20:58.511 { 00:20:58.511 "name": "BaseBdev1", 00:20:58.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.511 "is_configured": false, 00:20:58.511 "data_offset": 0, 00:20:58.511 "data_size": 0 00:20:58.511 }, 00:20:58.511 { 00:20:58.511 "name": "BaseBdev2", 00:20:58.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.511 "is_configured": false, 00:20:58.511 "data_offset": 0, 00:20:58.511 "data_size": 0 00:20:58.511 }, 00:20:58.511 { 00:20:58.511 "name": "BaseBdev3", 00:20:58.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.511 "is_configured": false, 00:20:58.511 "data_offset": 0, 
00:20:58.511 "data_size": 0 00:20:58.511 }, 00:20:58.511 { 00:20:58.511 "name": "BaseBdev4", 00:20:58.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.511 "is_configured": false, 00:20:58.511 "data_offset": 0, 00:20:58.511 "data_size": 0 00:20:58.511 } 00:20:58.511 ] 00:20:58.511 }' 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.511 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.077 [2024-11-08 17:08:35.525704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.077 [2024-11-08 17:08:35.525748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.077 [2024-11-08 17:08:35.533701] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:59.077 [2024-11-08 17:08:35.533870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:59.077 [2024-11-08 17:08:35.533941] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:20:59.077 [2024-11-08 17:08:35.533970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:59.077 [2024-11-08 17:08:35.534052] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:59.077 [2024-11-08 17:08:35.534091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:59.077 [2024-11-08 17:08:35.534120] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:59.077 [2024-11-08 17:08:35.534261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.077 [2024-11-08 17:08:35.568187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.077 BaseBdev1 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.077 [ 00:20:59.077 { 00:20:59.077 "name": "BaseBdev1", 00:20:59.077 "aliases": [ 00:20:59.077 "463f0af3-1e9f-45eb-bfd4-3d6449e0afe0" 00:20:59.077 ], 00:20:59.077 "product_name": "Malloc disk", 00:20:59.077 "block_size": 512, 00:20:59.077 "num_blocks": 65536, 00:20:59.077 "uuid": "463f0af3-1e9f-45eb-bfd4-3d6449e0afe0", 00:20:59.077 "assigned_rate_limits": { 00:20:59.077 "rw_ios_per_sec": 0, 00:20:59.077 "rw_mbytes_per_sec": 0, 00:20:59.077 "r_mbytes_per_sec": 0, 00:20:59.077 "w_mbytes_per_sec": 0 00:20:59.077 }, 00:20:59.077 "claimed": true, 00:20:59.077 "claim_type": "exclusive_write", 00:20:59.077 "zoned": false, 00:20:59.077 "supported_io_types": { 00:20:59.077 "read": true, 00:20:59.077 "write": true, 00:20:59.077 "unmap": true, 00:20:59.077 "flush": true, 00:20:59.077 "reset": true, 00:20:59.077 "nvme_admin": false, 00:20:59.077 "nvme_io": false, 00:20:59.077 "nvme_io_md": false, 00:20:59.077 "write_zeroes": true, 00:20:59.077 "zcopy": true, 00:20:59.077 "get_zone_info": false, 00:20:59.077 "zone_management": false, 00:20:59.077 "zone_append": false, 00:20:59.077 "compare": false, 00:20:59.077 "compare_and_write": 
false, 00:20:59.077 "abort": true, 00:20:59.077 "seek_hole": false, 00:20:59.077 "seek_data": false, 00:20:59.077 "copy": true, 00:20:59.077 "nvme_iov_md": false 00:20:59.077 }, 00:20:59.077 "memory_domains": [ 00:20:59.077 { 00:20:59.077 "dma_device_id": "system", 00:20:59.077 "dma_device_type": 1 00:20:59.077 }, 00:20:59.077 { 00:20:59.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.077 "dma_device_type": 2 00:20:59.077 } 00:20:59.077 ], 00:20:59.077 "driver_specific": {} 00:20:59.077 } 00:20:59.077 ] 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:59.077 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.078 "name": "Existed_Raid", 00:20:59.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.078 "strip_size_kb": 64, 00:20:59.078 "state": "configuring", 00:20:59.078 "raid_level": "raid0", 00:20:59.078 "superblock": false, 00:20:59.078 "num_base_bdevs": 4, 00:20:59.078 "num_base_bdevs_discovered": 1, 00:20:59.078 "num_base_bdevs_operational": 4, 00:20:59.078 "base_bdevs_list": [ 00:20:59.078 { 00:20:59.078 "name": "BaseBdev1", 00:20:59.078 "uuid": "463f0af3-1e9f-45eb-bfd4-3d6449e0afe0", 00:20:59.078 "is_configured": true, 00:20:59.078 "data_offset": 0, 00:20:59.078 "data_size": 65536 00:20:59.078 }, 00:20:59.078 { 00:20:59.078 "name": "BaseBdev2", 00:20:59.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.078 "is_configured": false, 00:20:59.078 "data_offset": 0, 00:20:59.078 "data_size": 0 00:20:59.078 }, 00:20:59.078 { 00:20:59.078 "name": "BaseBdev3", 00:20:59.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.078 "is_configured": false, 00:20:59.078 "data_offset": 0, 00:20:59.078 "data_size": 0 00:20:59.078 }, 00:20:59.078 { 00:20:59.078 "name": "BaseBdev4", 00:20:59.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.078 "is_configured": false, 00:20:59.078 "data_offset": 0, 00:20:59.078 "data_size": 0 00:20:59.078 } 00:20:59.078 ] 00:20:59.078 }' 00:20:59.078 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.078 
17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.336 [2024-11-08 17:08:35.944342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.336 [2024-11-08 17:08:35.944401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.336 [2024-11-08 17:08:35.952402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.336 [2024-11-08 17:08:35.954592] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:59.336 [2024-11-08 17:08:35.954641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:59.336 [2024-11-08 17:08:35.954652] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:59.336 [2024-11-08 17:08:35.954664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:59.336 [2024-11-08 17:08:35.954672] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:20:59.336 [2024-11-08 17:08:35.954681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.336 17:08:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.336 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.336 "name": "Existed_Raid", 00:20:59.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.336 "strip_size_kb": 64, 00:20:59.336 "state": "configuring", 00:20:59.336 "raid_level": "raid0", 00:20:59.336 "superblock": false, 00:20:59.336 "num_base_bdevs": 4, 00:20:59.336 "num_base_bdevs_discovered": 1, 00:20:59.336 "num_base_bdevs_operational": 4, 00:20:59.336 "base_bdevs_list": [ 00:20:59.336 { 00:20:59.336 "name": "BaseBdev1", 00:20:59.336 "uuid": "463f0af3-1e9f-45eb-bfd4-3d6449e0afe0", 00:20:59.336 "is_configured": true, 00:20:59.336 "data_offset": 0, 00:20:59.336 "data_size": 65536 00:20:59.336 }, 00:20:59.336 { 00:20:59.336 "name": "BaseBdev2", 00:20:59.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.336 "is_configured": false, 00:20:59.336 "data_offset": 0, 00:20:59.336 "data_size": 0 00:20:59.336 }, 00:20:59.336 { 00:20:59.336 "name": "BaseBdev3", 00:20:59.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.336 "is_configured": false, 00:20:59.336 "data_offset": 0, 00:20:59.336 "data_size": 0 00:20:59.336 }, 00:20:59.336 { 00:20:59.336 "name": "BaseBdev4", 00:20:59.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.336 "is_configured": false, 00:20:59.336 "data_offset": 0, 00:20:59.336 "data_size": 0 00:20:59.336 } 00:20:59.336 ] 00:20:59.336 }' 00:20:59.337 17:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.337 17:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:59.903 17:08:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.903 [2024-11-08 17:08:36.349418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:59.903 BaseBdev2 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:59.903 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.904 [ 00:20:59.904 { 00:20:59.904 "name": "BaseBdev2", 00:20:59.904 "aliases": [ 
00:20:59.904 "ab4b816f-da4e-442a-b78b-096c115a2d61" 00:20:59.904 ], 00:20:59.904 "product_name": "Malloc disk", 00:20:59.904 "block_size": 512, 00:20:59.904 "num_blocks": 65536, 00:20:59.904 "uuid": "ab4b816f-da4e-442a-b78b-096c115a2d61", 00:20:59.904 "assigned_rate_limits": { 00:20:59.904 "rw_ios_per_sec": 0, 00:20:59.904 "rw_mbytes_per_sec": 0, 00:20:59.904 "r_mbytes_per_sec": 0, 00:20:59.904 "w_mbytes_per_sec": 0 00:20:59.904 }, 00:20:59.904 "claimed": true, 00:20:59.904 "claim_type": "exclusive_write", 00:20:59.904 "zoned": false, 00:20:59.904 "supported_io_types": { 00:20:59.904 "read": true, 00:20:59.904 "write": true, 00:20:59.904 "unmap": true, 00:20:59.904 "flush": true, 00:20:59.904 "reset": true, 00:20:59.904 "nvme_admin": false, 00:20:59.904 "nvme_io": false, 00:20:59.904 "nvme_io_md": false, 00:20:59.904 "write_zeroes": true, 00:20:59.904 "zcopy": true, 00:20:59.904 "get_zone_info": false, 00:20:59.904 "zone_management": false, 00:20:59.904 "zone_append": false, 00:20:59.904 "compare": false, 00:20:59.904 "compare_and_write": false, 00:20:59.904 "abort": true, 00:20:59.904 "seek_hole": false, 00:20:59.904 "seek_data": false, 00:20:59.904 "copy": true, 00:20:59.904 "nvme_iov_md": false 00:20:59.904 }, 00:20:59.904 "memory_domains": [ 00:20:59.904 { 00:20:59.904 "dma_device_id": "system", 00:20:59.904 "dma_device_type": 1 00:20:59.904 }, 00:20:59.904 { 00:20:59.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.904 "dma_device_type": 2 00:20:59.904 } 00:20:59.904 ], 00:20:59.904 "driver_specific": {} 00:20:59.904 } 00:20:59.904 ] 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.904 "name": "Existed_Raid", 00:20:59.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.904 "strip_size_kb": 64, 00:20:59.904 "state": 
"configuring", 00:20:59.904 "raid_level": "raid0", 00:20:59.904 "superblock": false, 00:20:59.904 "num_base_bdevs": 4, 00:20:59.904 "num_base_bdevs_discovered": 2, 00:20:59.904 "num_base_bdevs_operational": 4, 00:20:59.904 "base_bdevs_list": [ 00:20:59.904 { 00:20:59.904 "name": "BaseBdev1", 00:20:59.904 "uuid": "463f0af3-1e9f-45eb-bfd4-3d6449e0afe0", 00:20:59.904 "is_configured": true, 00:20:59.904 "data_offset": 0, 00:20:59.904 "data_size": 65536 00:20:59.904 }, 00:20:59.904 { 00:20:59.904 "name": "BaseBdev2", 00:20:59.904 "uuid": "ab4b816f-da4e-442a-b78b-096c115a2d61", 00:20:59.904 "is_configured": true, 00:20:59.904 "data_offset": 0, 00:20:59.904 "data_size": 65536 00:20:59.904 }, 00:20:59.904 { 00:20:59.904 "name": "BaseBdev3", 00:20:59.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.904 "is_configured": false, 00:20:59.904 "data_offset": 0, 00:20:59.904 "data_size": 0 00:20:59.904 }, 00:20:59.904 { 00:20:59.904 "name": "BaseBdev4", 00:20:59.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.904 "is_configured": false, 00:20:59.904 "data_offset": 0, 00:20:59.904 "data_size": 0 00:20:59.904 } 00:20:59.904 ] 00:20:59.904 }' 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.904 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.173 [2024-11-08 17:08:36.771383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:00.173 BaseBdev3 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.173 17:08:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.173 [ 00:21:00.173 { 00:21:00.173 "name": "BaseBdev3", 00:21:00.173 "aliases": [ 00:21:00.173 "2d5e6f55-1220-4d2a-bd39-4788a5f2cff5" 00:21:00.173 ], 00:21:00.173 "product_name": "Malloc disk", 00:21:00.173 "block_size": 512, 00:21:00.173 "num_blocks": 65536, 00:21:00.173 "uuid": "2d5e6f55-1220-4d2a-bd39-4788a5f2cff5", 00:21:00.173 "assigned_rate_limits": { 00:21:00.173 "rw_ios_per_sec": 0, 00:21:00.173 "rw_mbytes_per_sec": 0, 00:21:00.173 "r_mbytes_per_sec": 0, 00:21:00.173 "w_mbytes_per_sec": 0 00:21:00.173 }, 00:21:00.173 "claimed": true, 
00:21:00.173 "claim_type": "exclusive_write", 00:21:00.173 "zoned": false, 00:21:00.173 "supported_io_types": { 00:21:00.173 "read": true, 00:21:00.173 "write": true, 00:21:00.173 "unmap": true, 00:21:00.173 "flush": true, 00:21:00.173 "reset": true, 00:21:00.173 "nvme_admin": false, 00:21:00.173 "nvme_io": false, 00:21:00.173 "nvme_io_md": false, 00:21:00.173 "write_zeroes": true, 00:21:00.173 "zcopy": true, 00:21:00.173 "get_zone_info": false, 00:21:00.173 "zone_management": false, 00:21:00.173 "zone_append": false, 00:21:00.173 "compare": false, 00:21:00.173 "compare_and_write": false, 00:21:00.173 "abort": true, 00:21:00.173 "seek_hole": false, 00:21:00.173 "seek_data": false, 00:21:00.173 "copy": true, 00:21:00.173 "nvme_iov_md": false 00:21:00.173 }, 00:21:00.173 "memory_domains": [ 00:21:00.173 { 00:21:00.173 "dma_device_id": "system", 00:21:00.173 "dma_device_type": 1 00:21:00.173 }, 00:21:00.173 { 00:21:00.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.173 "dma_device_type": 2 00:21:00.173 } 00:21:00.173 ], 00:21:00.173 "driver_specific": {} 00:21:00.173 } 00:21:00.173 ] 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.173 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.173 "name": "Existed_Raid", 00:21:00.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.173 "strip_size_kb": 64, 00:21:00.173 "state": "configuring", 00:21:00.173 "raid_level": "raid0", 00:21:00.173 "superblock": false, 00:21:00.173 "num_base_bdevs": 4, 00:21:00.173 "num_base_bdevs_discovered": 3, 00:21:00.173 "num_base_bdevs_operational": 4, 00:21:00.173 "base_bdevs_list": [ 00:21:00.173 { 00:21:00.173 "name": "BaseBdev1", 00:21:00.173 "uuid": "463f0af3-1e9f-45eb-bfd4-3d6449e0afe0", 00:21:00.173 "is_configured": true, 00:21:00.173 "data_offset": 0, 00:21:00.173 "data_size": 65536 00:21:00.173 }, 00:21:00.173 { 
00:21:00.173 "name": "BaseBdev2", 00:21:00.173 "uuid": "ab4b816f-da4e-442a-b78b-096c115a2d61", 00:21:00.173 "is_configured": true, 00:21:00.173 "data_offset": 0, 00:21:00.173 "data_size": 65536 00:21:00.173 }, 00:21:00.173 { 00:21:00.173 "name": "BaseBdev3", 00:21:00.173 "uuid": "2d5e6f55-1220-4d2a-bd39-4788a5f2cff5", 00:21:00.173 "is_configured": true, 00:21:00.174 "data_offset": 0, 00:21:00.174 "data_size": 65536 00:21:00.174 }, 00:21:00.174 { 00:21:00.174 "name": "BaseBdev4", 00:21:00.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.174 "is_configured": false, 00:21:00.174 "data_offset": 0, 00:21:00.174 "data_size": 0 00:21:00.174 } 00:21:00.174 ] 00:21:00.174 }' 00:21:00.174 17:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.174 17:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.432 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:00.432 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.432 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.432 [2024-11-08 17:08:37.144275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:00.432 [2024-11-08 17:08:37.144506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:00.432 [2024-11-08 17:08:37.144539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:00.432 [2024-11-08 17:08:37.144907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:00.432 [2024-11-08 17:08:37.145151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:00.432 [2024-11-08 17:08:37.145225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:21:00.432 [2024-11-08 17:08:37.145552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.696 BaseBdev4 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 [ 00:21:00.696 { 00:21:00.696 "name": "BaseBdev4", 00:21:00.696 "aliases": [ 00:21:00.696 "60d7615e-747f-4b27-a160-1e829d7358bf" 00:21:00.696 ], 00:21:00.696 "product_name": "Malloc disk", 00:21:00.696 "block_size": 512, 00:21:00.696 
"num_blocks": 65536, 00:21:00.696 "uuid": "60d7615e-747f-4b27-a160-1e829d7358bf", 00:21:00.696 "assigned_rate_limits": { 00:21:00.696 "rw_ios_per_sec": 0, 00:21:00.696 "rw_mbytes_per_sec": 0, 00:21:00.696 "r_mbytes_per_sec": 0, 00:21:00.696 "w_mbytes_per_sec": 0 00:21:00.696 }, 00:21:00.696 "claimed": true, 00:21:00.696 "claim_type": "exclusive_write", 00:21:00.696 "zoned": false, 00:21:00.696 "supported_io_types": { 00:21:00.696 "read": true, 00:21:00.696 "write": true, 00:21:00.696 "unmap": true, 00:21:00.696 "flush": true, 00:21:00.696 "reset": true, 00:21:00.696 "nvme_admin": false, 00:21:00.696 "nvme_io": false, 00:21:00.696 "nvme_io_md": false, 00:21:00.696 "write_zeroes": true, 00:21:00.696 "zcopy": true, 00:21:00.696 "get_zone_info": false, 00:21:00.696 "zone_management": false, 00:21:00.696 "zone_append": false, 00:21:00.696 "compare": false, 00:21:00.696 "compare_and_write": false, 00:21:00.696 "abort": true, 00:21:00.696 "seek_hole": false, 00:21:00.696 "seek_data": false, 00:21:00.696 "copy": true, 00:21:00.696 "nvme_iov_md": false 00:21:00.696 }, 00:21:00.696 "memory_domains": [ 00:21:00.696 { 00:21:00.696 "dma_device_id": "system", 00:21:00.696 "dma_device_type": 1 00:21:00.696 }, 00:21:00.696 { 00:21:00.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.696 "dma_device_type": 2 00:21:00.696 } 00:21:00.696 ], 00:21:00.696 "driver_specific": {} 00:21:00.696 } 00:21:00.696 ] 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:00.696 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:00.697 17:08:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.697 "name": "Existed_Raid", 00:21:00.697 "uuid": "b79e0f7b-c263-4c82-b663-4bef6d071ca4", 00:21:00.697 "strip_size_kb": 64, 00:21:00.697 "state": "online", 00:21:00.697 "raid_level": "raid0", 00:21:00.697 "superblock": false, 00:21:00.697 "num_base_bdevs": 4, 00:21:00.697 "num_base_bdevs_discovered": 4, 00:21:00.697 
"num_base_bdevs_operational": 4, 00:21:00.697 "base_bdevs_list": [ 00:21:00.697 { 00:21:00.697 "name": "BaseBdev1", 00:21:00.697 "uuid": "463f0af3-1e9f-45eb-bfd4-3d6449e0afe0", 00:21:00.697 "is_configured": true, 00:21:00.697 "data_offset": 0, 00:21:00.697 "data_size": 65536 00:21:00.697 }, 00:21:00.697 { 00:21:00.697 "name": "BaseBdev2", 00:21:00.697 "uuid": "ab4b816f-da4e-442a-b78b-096c115a2d61", 00:21:00.697 "is_configured": true, 00:21:00.697 "data_offset": 0, 00:21:00.697 "data_size": 65536 00:21:00.697 }, 00:21:00.697 { 00:21:00.697 "name": "BaseBdev3", 00:21:00.697 "uuid": "2d5e6f55-1220-4d2a-bd39-4788a5f2cff5", 00:21:00.697 "is_configured": true, 00:21:00.697 "data_offset": 0, 00:21:00.697 "data_size": 65536 00:21:00.697 }, 00:21:00.697 { 00:21:00.697 "name": "BaseBdev4", 00:21:00.697 "uuid": "60d7615e-747f-4b27-a160-1e829d7358bf", 00:21:00.697 "is_configured": true, 00:21:00.697 "data_offset": 0, 00:21:00.697 "data_size": 65536 00:21:00.697 } 00:21:00.697 ] 00:21:00.697 }' 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.697 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.979 [2024-11-08 17:08:37.488814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.979 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:00.979 "name": "Existed_Raid", 00:21:00.979 "aliases": [ 00:21:00.979 "b79e0f7b-c263-4c82-b663-4bef6d071ca4" 00:21:00.979 ], 00:21:00.979 "product_name": "Raid Volume", 00:21:00.979 "block_size": 512, 00:21:00.979 "num_blocks": 262144, 00:21:00.979 "uuid": "b79e0f7b-c263-4c82-b663-4bef6d071ca4", 00:21:00.979 "assigned_rate_limits": { 00:21:00.979 "rw_ios_per_sec": 0, 00:21:00.980 "rw_mbytes_per_sec": 0, 00:21:00.980 "r_mbytes_per_sec": 0, 00:21:00.980 "w_mbytes_per_sec": 0 00:21:00.980 }, 00:21:00.980 "claimed": false, 00:21:00.980 "zoned": false, 00:21:00.980 "supported_io_types": { 00:21:00.980 "read": true, 00:21:00.980 "write": true, 00:21:00.980 "unmap": true, 00:21:00.980 "flush": true, 00:21:00.980 "reset": true, 00:21:00.980 "nvme_admin": false, 00:21:00.980 "nvme_io": false, 00:21:00.980 "nvme_io_md": false, 00:21:00.980 "write_zeroes": true, 00:21:00.980 "zcopy": false, 00:21:00.980 "get_zone_info": false, 00:21:00.980 "zone_management": false, 00:21:00.980 "zone_append": false, 00:21:00.980 "compare": false, 00:21:00.980 "compare_and_write": false, 00:21:00.980 "abort": false, 00:21:00.980 "seek_hole": false, 00:21:00.980 "seek_data": false, 00:21:00.980 "copy": false, 00:21:00.980 "nvme_iov_md": false 00:21:00.980 }, 00:21:00.980 "memory_domains": [ 00:21:00.980 { 00:21:00.980 "dma_device_id": "system", 
00:21:00.980 "dma_device_type": 1 00:21:00.980 }, 00:21:00.980 { 00:21:00.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.980 "dma_device_type": 2 00:21:00.980 }, 00:21:00.980 { 00:21:00.980 "dma_device_id": "system", 00:21:00.980 "dma_device_type": 1 00:21:00.980 }, 00:21:00.980 { 00:21:00.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.980 "dma_device_type": 2 00:21:00.980 }, 00:21:00.980 { 00:21:00.980 "dma_device_id": "system", 00:21:00.980 "dma_device_type": 1 00:21:00.980 }, 00:21:00.980 { 00:21:00.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.980 "dma_device_type": 2 00:21:00.980 }, 00:21:00.980 { 00:21:00.980 "dma_device_id": "system", 00:21:00.980 "dma_device_type": 1 00:21:00.980 }, 00:21:00.980 { 00:21:00.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.980 "dma_device_type": 2 00:21:00.980 } 00:21:00.980 ], 00:21:00.980 "driver_specific": { 00:21:00.980 "raid": { 00:21:00.980 "uuid": "b79e0f7b-c263-4c82-b663-4bef6d071ca4", 00:21:00.980 "strip_size_kb": 64, 00:21:00.980 "state": "online", 00:21:00.980 "raid_level": "raid0", 00:21:00.980 "superblock": false, 00:21:00.980 "num_base_bdevs": 4, 00:21:00.980 "num_base_bdevs_discovered": 4, 00:21:00.980 "num_base_bdevs_operational": 4, 00:21:00.980 "base_bdevs_list": [ 00:21:00.980 { 00:21:00.980 "name": "BaseBdev1", 00:21:00.980 "uuid": "463f0af3-1e9f-45eb-bfd4-3d6449e0afe0", 00:21:00.980 "is_configured": true, 00:21:00.980 "data_offset": 0, 00:21:00.980 "data_size": 65536 00:21:00.980 }, 00:21:00.980 { 00:21:00.980 "name": "BaseBdev2", 00:21:00.980 "uuid": "ab4b816f-da4e-442a-b78b-096c115a2d61", 00:21:00.980 "is_configured": true, 00:21:00.980 "data_offset": 0, 00:21:00.980 "data_size": 65536 00:21:00.980 }, 00:21:00.980 { 00:21:00.980 "name": "BaseBdev3", 00:21:00.980 "uuid": "2d5e6f55-1220-4d2a-bd39-4788a5f2cff5", 00:21:00.980 "is_configured": true, 00:21:00.980 "data_offset": 0, 00:21:00.980 "data_size": 65536 00:21:00.980 }, 00:21:00.980 { 00:21:00.980 "name": "BaseBdev4", 
00:21:00.980 "uuid": "60d7615e-747f-4b27-a160-1e829d7358bf", 00:21:00.980 "is_configured": true, 00:21:00.980 "data_offset": 0, 00:21:00.980 "data_size": 65536 00:21:00.980 } 00:21:00.980 ] 00:21:00.980 } 00:21:00.980 } 00:21:00.980 }' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:00.980 BaseBdev2 00:21:00.980 BaseBdev3 00:21:00.980 BaseBdev4' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.980 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.239 [2024-11-08 17:08:37.724562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:01.239 [2024-11-08 17:08:37.724689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:01.239 [2024-11-08 17:08:37.724808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:01.239 17:08:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.239 "name": "Existed_Raid", 00:21:01.239 "uuid": "b79e0f7b-c263-4c82-b663-4bef6d071ca4", 00:21:01.239 "strip_size_kb": 64, 00:21:01.239 "state": "offline", 00:21:01.239 "raid_level": 
"raid0", 00:21:01.239 "superblock": false, 00:21:01.239 "num_base_bdevs": 4, 00:21:01.239 "num_base_bdevs_discovered": 3, 00:21:01.239 "num_base_bdevs_operational": 3, 00:21:01.239 "base_bdevs_list": [ 00:21:01.239 { 00:21:01.239 "name": null, 00:21:01.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.239 "is_configured": false, 00:21:01.239 "data_offset": 0, 00:21:01.239 "data_size": 65536 00:21:01.239 }, 00:21:01.239 { 00:21:01.239 "name": "BaseBdev2", 00:21:01.239 "uuid": "ab4b816f-da4e-442a-b78b-096c115a2d61", 00:21:01.239 "is_configured": true, 00:21:01.239 "data_offset": 0, 00:21:01.239 "data_size": 65536 00:21:01.239 }, 00:21:01.239 { 00:21:01.239 "name": "BaseBdev3", 00:21:01.239 "uuid": "2d5e6f55-1220-4d2a-bd39-4788a5f2cff5", 00:21:01.239 "is_configured": true, 00:21:01.239 "data_offset": 0, 00:21:01.239 "data_size": 65536 00:21:01.239 }, 00:21:01.239 { 00:21:01.239 "name": "BaseBdev4", 00:21:01.239 "uuid": "60d7615e-747f-4b27-a160-1e829d7358bf", 00:21:01.239 "is_configured": true, 00:21:01.239 "data_offset": 0, 00:21:01.239 "data_size": 65536 00:21:01.239 } 00:21:01.239 ] 00:21:01.239 }' 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.239 17:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:01.497 17:08:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.497 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.497 [2024-11-08 17:08:38.165863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:01.756 
17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.756 [2024-11-08 17:08:38.268107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.756 [2024-11-08 17:08:38.370453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:01.756 [2024-11-08 17:08:38.370630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 
00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.756 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.016 BaseBdev2 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:02.016 17:08:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.016 [ 00:21:02.016 { 00:21:02.016 "name": "BaseBdev2", 00:21:02.016 "aliases": [ 00:21:02.016 "628f75ce-e656-4959-911a-e8dbd267dabe" 00:21:02.016 ], 00:21:02.016 "product_name": "Malloc disk", 00:21:02.016 "block_size": 512, 00:21:02.016 "num_blocks": 65536, 00:21:02.016 "uuid": "628f75ce-e656-4959-911a-e8dbd267dabe", 00:21:02.016 "assigned_rate_limits": { 00:21:02.016 "rw_ios_per_sec": 0, 00:21:02.016 "rw_mbytes_per_sec": 0, 00:21:02.016 "r_mbytes_per_sec": 0, 00:21:02.016 "w_mbytes_per_sec": 0 00:21:02.016 }, 00:21:02.016 "claimed": false, 00:21:02.016 "zoned": false, 00:21:02.016 "supported_io_types": { 00:21:02.016 "read": true, 00:21:02.016 
"write": true, 00:21:02.016 "unmap": true, 00:21:02.016 "flush": true, 00:21:02.016 "reset": true, 00:21:02.016 "nvme_admin": false, 00:21:02.016 "nvme_io": false, 00:21:02.016 "nvme_io_md": false, 00:21:02.016 "write_zeroes": true, 00:21:02.016 "zcopy": true, 00:21:02.016 "get_zone_info": false, 00:21:02.016 "zone_management": false, 00:21:02.016 "zone_append": false, 00:21:02.016 "compare": false, 00:21:02.016 "compare_and_write": false, 00:21:02.016 "abort": true, 00:21:02.016 "seek_hole": false, 00:21:02.016 "seek_data": false, 00:21:02.016 "copy": true, 00:21:02.016 "nvme_iov_md": false 00:21:02.016 }, 00:21:02.016 "memory_domains": [ 00:21:02.016 { 00:21:02.016 "dma_device_id": "system", 00:21:02.016 "dma_device_type": 1 00:21:02.016 }, 00:21:02.016 { 00:21:02.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.016 "dma_device_type": 2 00:21:02.016 } 00:21:02.016 ], 00:21:02.016 "driver_specific": {} 00:21:02.016 } 00:21:02.016 ] 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:02.016 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.017 BaseBdev3 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:02.017 17:08:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.017 [ 00:21:02.017 { 00:21:02.017 "name": "BaseBdev3", 00:21:02.017 "aliases": [ 00:21:02.017 "53270531-b273-4fc5-abac-84c6495d047e" 00:21:02.017 ], 00:21:02.017 "product_name": "Malloc disk", 00:21:02.017 "block_size": 512, 00:21:02.017 "num_blocks": 65536, 00:21:02.017 "uuid": "53270531-b273-4fc5-abac-84c6495d047e", 00:21:02.017 "assigned_rate_limits": { 00:21:02.017 "rw_ios_per_sec": 0, 00:21:02.017 "rw_mbytes_per_sec": 0, 00:21:02.017 "r_mbytes_per_sec": 0, 00:21:02.017 "w_mbytes_per_sec": 0 00:21:02.017 }, 00:21:02.017 "claimed": false, 00:21:02.017 "zoned": false, 00:21:02.017 "supported_io_types": { 00:21:02.017 "read": true, 00:21:02.017 
"write": true, 00:21:02.017 "unmap": true, 00:21:02.017 "flush": true, 00:21:02.017 "reset": true, 00:21:02.017 "nvme_admin": false, 00:21:02.017 "nvme_io": false, 00:21:02.017 "nvme_io_md": false, 00:21:02.017 "write_zeroes": true, 00:21:02.017 "zcopy": true, 00:21:02.017 "get_zone_info": false, 00:21:02.017 "zone_management": false, 00:21:02.017 "zone_append": false, 00:21:02.017 "compare": false, 00:21:02.017 "compare_and_write": false, 00:21:02.017 "abort": true, 00:21:02.017 "seek_hole": false, 00:21:02.017 "seek_data": false, 00:21:02.017 "copy": true, 00:21:02.017 "nvme_iov_md": false 00:21:02.017 }, 00:21:02.017 "memory_domains": [ 00:21:02.017 { 00:21:02.017 "dma_device_id": "system", 00:21:02.017 "dma_device_type": 1 00:21:02.017 }, 00:21:02.017 { 00:21:02.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.017 "dma_device_type": 2 00:21:02.017 } 00:21:02.017 ], 00:21:02.017 "driver_specific": {} 00:21:02.017 } 00:21:02.017 ] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.017 BaseBdev4 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:02.017 17:08:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.017 [ 00:21:02.017 { 00:21:02.017 "name": "BaseBdev4", 00:21:02.017 "aliases": [ 00:21:02.017 "cd20e18d-7078-4051-8aba-d0069dad71a0" 00:21:02.017 ], 00:21:02.017 "product_name": "Malloc disk", 00:21:02.017 "block_size": 512, 00:21:02.017 "num_blocks": 65536, 00:21:02.017 "uuid": "cd20e18d-7078-4051-8aba-d0069dad71a0", 00:21:02.017 "assigned_rate_limits": { 00:21:02.017 "rw_ios_per_sec": 0, 00:21:02.017 "rw_mbytes_per_sec": 0, 00:21:02.017 "r_mbytes_per_sec": 0, 00:21:02.017 "w_mbytes_per_sec": 0 00:21:02.017 }, 00:21:02.017 "claimed": false, 00:21:02.017 "zoned": false, 00:21:02.017 "supported_io_types": { 00:21:02.017 "read": true, 00:21:02.017 
"write": true, 00:21:02.017 "unmap": true, 00:21:02.017 "flush": true, 00:21:02.017 "reset": true, 00:21:02.017 "nvme_admin": false, 00:21:02.017 "nvme_io": false, 00:21:02.017 "nvme_io_md": false, 00:21:02.017 "write_zeroes": true, 00:21:02.017 "zcopy": true, 00:21:02.017 "get_zone_info": false, 00:21:02.017 "zone_management": false, 00:21:02.017 "zone_append": false, 00:21:02.017 "compare": false, 00:21:02.017 "compare_and_write": false, 00:21:02.017 "abort": true, 00:21:02.017 "seek_hole": false, 00:21:02.017 "seek_data": false, 00:21:02.017 "copy": true, 00:21:02.017 "nvme_iov_md": false 00:21:02.017 }, 00:21:02.017 "memory_domains": [ 00:21:02.017 { 00:21:02.017 "dma_device_id": "system", 00:21:02.017 "dma_device_type": 1 00:21:02.017 }, 00:21:02.017 { 00:21:02.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.017 "dma_device_type": 2 00:21:02.017 } 00:21:02.017 ], 00:21:02.017 "driver_specific": {} 00:21:02.017 } 00:21:02.017 ] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.017 [2024-11-08 17:08:38.646444] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:02.017 [2024-11-08 17:08:38.646586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:02.017 [2024-11-08 17:08:38.646619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:02.017 [2024-11-08 17:08:38.648619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:02.017 [2024-11-08 17:08:38.648670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.017 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.017 "name": "Existed_Raid", 00:21:02.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.017 "strip_size_kb": 64, 00:21:02.017 "state": "configuring", 00:21:02.017 "raid_level": "raid0", 00:21:02.017 "superblock": false, 00:21:02.017 "num_base_bdevs": 4, 00:21:02.017 "num_base_bdevs_discovered": 3, 00:21:02.017 "num_base_bdevs_operational": 4, 00:21:02.017 "base_bdevs_list": [ 00:21:02.017 { 00:21:02.017 "name": "BaseBdev1", 00:21:02.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.017 "is_configured": false, 00:21:02.017 "data_offset": 0, 00:21:02.018 "data_size": 0 00:21:02.018 }, 00:21:02.018 { 00:21:02.018 "name": "BaseBdev2", 00:21:02.018 "uuid": "628f75ce-e656-4959-911a-e8dbd267dabe", 00:21:02.018 "is_configured": true, 00:21:02.018 "data_offset": 0, 00:21:02.018 "data_size": 65536 00:21:02.018 }, 00:21:02.018 { 00:21:02.018 "name": "BaseBdev3", 00:21:02.018 "uuid": "53270531-b273-4fc5-abac-84c6495d047e", 00:21:02.018 "is_configured": true, 00:21:02.018 "data_offset": 0, 00:21:02.018 "data_size": 65536 00:21:02.018 }, 00:21:02.018 { 00:21:02.018 "name": "BaseBdev4", 00:21:02.018 "uuid": "cd20e18d-7078-4051-8aba-d0069dad71a0", 00:21:02.018 "is_configured": true, 00:21:02.018 "data_offset": 0, 00:21:02.018 "data_size": 65536 00:21:02.018 } 00:21:02.018 ] 00:21:02.018 }' 00:21:02.018 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.018 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- 
# rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.277 [2024-11-08 17:08:38.982535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.277 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.278 17:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.541 17:08:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.541 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.541 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.541 "name": "Existed_Raid", 00:21:02.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.541 "strip_size_kb": 64, 00:21:02.541 "state": "configuring", 00:21:02.541 "raid_level": "raid0", 00:21:02.541 "superblock": false, 00:21:02.541 "num_base_bdevs": 4, 00:21:02.541 "num_base_bdevs_discovered": 2, 00:21:02.541 "num_base_bdevs_operational": 4, 00:21:02.541 "base_bdevs_list": [ 00:21:02.541 { 00:21:02.541 "name": "BaseBdev1", 00:21:02.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.541 "is_configured": false, 00:21:02.541 "data_offset": 0, 00:21:02.541 "data_size": 0 00:21:02.541 }, 00:21:02.541 { 00:21:02.541 "name": null, 00:21:02.541 "uuid": "628f75ce-e656-4959-911a-e8dbd267dabe", 00:21:02.541 "is_configured": false, 00:21:02.541 "data_offset": 0, 00:21:02.541 "data_size": 65536 00:21:02.541 }, 00:21:02.541 { 00:21:02.541 "name": "BaseBdev3", 00:21:02.541 "uuid": "53270531-b273-4fc5-abac-84c6495d047e", 00:21:02.541 "is_configured": true, 00:21:02.541 "data_offset": 0, 00:21:02.541 "data_size": 65536 00:21:02.541 }, 00:21:02.541 { 00:21:02.541 "name": "BaseBdev4", 00:21:02.541 "uuid": "cd20e18d-7078-4051-8aba-d0069dad71a0", 00:21:02.541 "is_configured": true, 00:21:02.541 "data_offset": 0, 00:21:02.541 "data_size": 65536 00:21:02.541 } 00:21:02.541 ] 00:21:02.541 }' 00:21:02.541 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.541 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.800 17:08:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.800 [2024-11-08 17:08:39.354959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:02.800 BaseBdev1 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.800 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.800 [ 00:21:02.800 { 00:21:02.800 "name": "BaseBdev1", 00:21:02.800 "aliases": [ 00:21:02.800 "763b4838-ce4a-4183-8501-a86383326315" 00:21:02.800 ], 00:21:02.800 "product_name": "Malloc disk", 00:21:02.800 "block_size": 512, 00:21:02.800 "num_blocks": 65536, 00:21:02.800 "uuid": "763b4838-ce4a-4183-8501-a86383326315", 00:21:02.800 "assigned_rate_limits": { 00:21:02.800 "rw_ios_per_sec": 0, 00:21:02.800 "rw_mbytes_per_sec": 0, 00:21:02.800 "r_mbytes_per_sec": 0, 00:21:02.800 "w_mbytes_per_sec": 0 00:21:02.800 }, 00:21:02.800 "claimed": true, 00:21:02.800 "claim_type": "exclusive_write", 00:21:02.800 "zoned": false, 00:21:02.800 "supported_io_types": { 00:21:02.800 "read": true, 00:21:02.800 "write": true, 00:21:02.800 "unmap": true, 00:21:02.800 "flush": true, 00:21:02.800 "reset": true, 00:21:02.800 "nvme_admin": false, 00:21:02.800 "nvme_io": false, 00:21:02.800 "nvme_io_md": false, 00:21:02.800 "write_zeroes": true, 00:21:02.800 "zcopy": true, 00:21:02.800 "get_zone_info": false, 00:21:02.801 "zone_management": false, 00:21:02.801 "zone_append": false, 00:21:02.801 "compare": false, 00:21:02.801 "compare_and_write": false, 00:21:02.801 "abort": true, 00:21:02.801 "seek_hole": false, 00:21:02.801 "seek_data": false, 00:21:02.801 "copy": true, 00:21:02.801 "nvme_iov_md": false 00:21:02.801 }, 00:21:02.801 "memory_domains": [ 00:21:02.801 { 00:21:02.801 
"dma_device_id": "system", 00:21:02.801 "dma_device_type": 1 00:21:02.801 }, 00:21:02.801 { 00:21:02.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.801 "dma_device_type": 2 00:21:02.801 } 00:21:02.801 ], 00:21:02.801 "driver_specific": {} 00:21:02.801 } 00:21:02.801 ] 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.801 "name": "Existed_Raid", 00:21:02.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.801 "strip_size_kb": 64, 00:21:02.801 "state": "configuring", 00:21:02.801 "raid_level": "raid0", 00:21:02.801 "superblock": false, 00:21:02.801 "num_base_bdevs": 4, 00:21:02.801 "num_base_bdevs_discovered": 3, 00:21:02.801 "num_base_bdevs_operational": 4, 00:21:02.801 "base_bdevs_list": [ 00:21:02.801 { 00:21:02.801 "name": "BaseBdev1", 00:21:02.801 "uuid": "763b4838-ce4a-4183-8501-a86383326315", 00:21:02.801 "is_configured": true, 00:21:02.801 "data_offset": 0, 00:21:02.801 "data_size": 65536 00:21:02.801 }, 00:21:02.801 { 00:21:02.801 "name": null, 00:21:02.801 "uuid": "628f75ce-e656-4959-911a-e8dbd267dabe", 00:21:02.801 "is_configured": false, 00:21:02.801 "data_offset": 0, 00:21:02.801 "data_size": 65536 00:21:02.801 }, 00:21:02.801 { 00:21:02.801 "name": "BaseBdev3", 00:21:02.801 "uuid": "53270531-b273-4fc5-abac-84c6495d047e", 00:21:02.801 "is_configured": true, 00:21:02.801 "data_offset": 0, 00:21:02.801 "data_size": 65536 00:21:02.801 }, 00:21:02.801 { 00:21:02.801 "name": "BaseBdev4", 00:21:02.801 "uuid": "cd20e18d-7078-4051-8aba-d0069dad71a0", 00:21:02.801 "is_configured": true, 00:21:02.801 "data_offset": 0, 00:21:02.801 "data_size": 65536 00:21:02.801 } 00:21:02.801 ] 00:21:02.801 }' 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.801 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.066 17:08:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.066 [2024-11-08 17:08:39.747136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.066 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.327 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.327 "name": "Existed_Raid", 00:21:03.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.327 "strip_size_kb": 64, 00:21:03.327 "state": "configuring", 00:21:03.327 "raid_level": "raid0", 00:21:03.327 "superblock": false, 00:21:03.327 "num_base_bdevs": 4, 00:21:03.327 "num_base_bdevs_discovered": 2, 00:21:03.327 "num_base_bdevs_operational": 4, 00:21:03.327 "base_bdevs_list": [ 00:21:03.327 { 00:21:03.327 "name": "BaseBdev1", 00:21:03.327 "uuid": "763b4838-ce4a-4183-8501-a86383326315", 00:21:03.327 "is_configured": true, 00:21:03.327 "data_offset": 0, 00:21:03.327 "data_size": 65536 00:21:03.327 }, 00:21:03.327 { 00:21:03.327 "name": null, 00:21:03.327 "uuid": "628f75ce-e656-4959-911a-e8dbd267dabe", 00:21:03.327 "is_configured": false, 00:21:03.327 "data_offset": 0, 00:21:03.327 "data_size": 65536 00:21:03.327 }, 00:21:03.327 { 00:21:03.327 "name": null, 00:21:03.327 "uuid": "53270531-b273-4fc5-abac-84c6495d047e", 00:21:03.327 "is_configured": false, 00:21:03.327 "data_offset": 0, 00:21:03.327 "data_size": 65536 00:21:03.327 
}, 00:21:03.327 { 00:21:03.327 "name": "BaseBdev4", 00:21:03.327 "uuid": "cd20e18d-7078-4051-8aba-d0069dad71a0", 00:21:03.327 "is_configured": true, 00:21:03.327 "data_offset": 0, 00:21:03.327 "data_size": 65536 00:21:03.327 } 00:21:03.327 ] 00:21:03.327 }' 00:21:03.327 17:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.327 17:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.585 [2024-11-08 17:08:40.143241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.585 "name": "Existed_Raid", 00:21:03.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.585 "strip_size_kb": 64, 00:21:03.585 "state": "configuring", 00:21:03.585 "raid_level": "raid0", 00:21:03.585 "superblock": false, 00:21:03.585 "num_base_bdevs": 4, 00:21:03.585 "num_base_bdevs_discovered": 3, 00:21:03.585 "num_base_bdevs_operational": 4, 00:21:03.585 "base_bdevs_list": [ 00:21:03.585 { 00:21:03.585 
"name": "BaseBdev1", 00:21:03.585 "uuid": "763b4838-ce4a-4183-8501-a86383326315", 00:21:03.585 "is_configured": true, 00:21:03.585 "data_offset": 0, 00:21:03.585 "data_size": 65536 00:21:03.585 }, 00:21:03.585 { 00:21:03.585 "name": null, 00:21:03.585 "uuid": "628f75ce-e656-4959-911a-e8dbd267dabe", 00:21:03.585 "is_configured": false, 00:21:03.585 "data_offset": 0, 00:21:03.585 "data_size": 65536 00:21:03.585 }, 00:21:03.585 { 00:21:03.585 "name": "BaseBdev3", 00:21:03.585 "uuid": "53270531-b273-4fc5-abac-84c6495d047e", 00:21:03.585 "is_configured": true, 00:21:03.585 "data_offset": 0, 00:21:03.585 "data_size": 65536 00:21:03.585 }, 00:21:03.585 { 00:21:03.585 "name": "BaseBdev4", 00:21:03.585 "uuid": "cd20e18d-7078-4051-8aba-d0069dad71a0", 00:21:03.585 "is_configured": true, 00:21:03.585 "data_offset": 0, 00:21:03.585 "data_size": 65536 00:21:03.585 } 00:21:03.585 ] 00:21:03.585 }' 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.585 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.842 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.842 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:03.842 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.842 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.842 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.842 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:03.842 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:03.842 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:03.842 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.842 [2024-11-08 17:08:40.531373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.100 17:08:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.100 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.100 "name": "Existed_Raid", 00:21:04.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.100 "strip_size_kb": 64, 00:21:04.100 "state": "configuring", 00:21:04.100 "raid_level": "raid0", 00:21:04.100 "superblock": false, 00:21:04.100 "num_base_bdevs": 4, 00:21:04.100 "num_base_bdevs_discovered": 2, 00:21:04.100 "num_base_bdevs_operational": 4, 00:21:04.100 "base_bdevs_list": [ 00:21:04.100 { 00:21:04.100 "name": null, 00:21:04.100 "uuid": "763b4838-ce4a-4183-8501-a86383326315", 00:21:04.100 "is_configured": false, 00:21:04.100 "data_offset": 0, 00:21:04.100 "data_size": 65536 00:21:04.100 }, 00:21:04.100 { 00:21:04.100 "name": null, 00:21:04.100 "uuid": "628f75ce-e656-4959-911a-e8dbd267dabe", 00:21:04.100 "is_configured": false, 00:21:04.100 "data_offset": 0, 00:21:04.100 "data_size": 65536 00:21:04.100 }, 00:21:04.100 { 00:21:04.100 "name": "BaseBdev3", 00:21:04.100 "uuid": "53270531-b273-4fc5-abac-84c6495d047e", 00:21:04.100 "is_configured": true, 00:21:04.100 "data_offset": 0, 00:21:04.100 "data_size": 65536 00:21:04.100 }, 00:21:04.100 { 00:21:04.100 "name": "BaseBdev4", 00:21:04.100 "uuid": "cd20e18d-7078-4051-8aba-d0069dad71a0", 00:21:04.100 "is_configured": true, 00:21:04.101 "data_offset": 0, 00:21:04.101 "data_size": 65536 00:21:04.101 } 00:21:04.101 ] 00:21:04.101 }' 00:21:04.101 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.101 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:04.357 17:08:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.357 [2024-11-08 17:08:40.983115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:04.357 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.358 17:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.358 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.358 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.358 "name": "Existed_Raid", 00:21:04.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.358 "strip_size_kb": 64, 00:21:04.358 "state": "configuring", 00:21:04.358 "raid_level": "raid0", 00:21:04.358 "superblock": false, 00:21:04.358 "num_base_bdevs": 4, 00:21:04.358 "num_base_bdevs_discovered": 3, 00:21:04.358 "num_base_bdevs_operational": 4, 00:21:04.358 "base_bdevs_list": [ 00:21:04.358 { 00:21:04.358 "name": null, 00:21:04.358 "uuid": "763b4838-ce4a-4183-8501-a86383326315", 00:21:04.358 "is_configured": false, 00:21:04.358 "data_offset": 0, 00:21:04.358 "data_size": 65536 00:21:04.358 }, 00:21:04.358 { 00:21:04.358 "name": "BaseBdev2", 00:21:04.358 "uuid": "628f75ce-e656-4959-911a-e8dbd267dabe", 00:21:04.358 "is_configured": true, 00:21:04.358 "data_offset": 0, 00:21:04.358 "data_size": 65536 00:21:04.358 }, 00:21:04.358 { 00:21:04.358 "name": "BaseBdev3", 00:21:04.358 "uuid": "53270531-b273-4fc5-abac-84c6495d047e", 00:21:04.358 "is_configured": true, 00:21:04.358 "data_offset": 0, 00:21:04.358 "data_size": 65536 00:21:04.358 }, 00:21:04.358 { 00:21:04.358 "name": "BaseBdev4", 00:21:04.358 "uuid": 
"cd20e18d-7078-4051-8aba-d0069dad71a0", 00:21:04.358 "is_configured": true, 00:21:04.358 "data_offset": 0, 00:21:04.358 "data_size": 65536 00:21:04.358 } 00:21:04.358 ] 00:21:04.358 }' 00:21:04.358 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.358 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.615 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.615 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.615 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.615 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:04.615 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 763b4838-ce4a-4183-8501-a86383326315 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.874 [2024-11-08 17:08:41.404350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:04.874 [2024-11-08 17:08:41.404607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:04.874 [2024-11-08 17:08:41.404622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:04.874 [2024-11-08 17:08:41.404941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:04.874 [2024-11-08 17:08:41.405104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:04.874 [2024-11-08 17:08:41.405117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:04.874 [2024-11-08 17:08:41.405439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.874 NewBaseBdev 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.874 
17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.874 [ 00:21:04.874 { 00:21:04.874 "name": "NewBaseBdev", 00:21:04.874 "aliases": [ 00:21:04.874 "763b4838-ce4a-4183-8501-a86383326315" 00:21:04.874 ], 00:21:04.874 "product_name": "Malloc disk", 00:21:04.874 "block_size": 512, 00:21:04.874 "num_blocks": 65536, 00:21:04.874 "uuid": "763b4838-ce4a-4183-8501-a86383326315", 00:21:04.874 "assigned_rate_limits": { 00:21:04.874 "rw_ios_per_sec": 0, 00:21:04.874 "rw_mbytes_per_sec": 0, 00:21:04.874 "r_mbytes_per_sec": 0, 00:21:04.874 "w_mbytes_per_sec": 0 00:21:04.874 }, 00:21:04.874 "claimed": true, 00:21:04.874 "claim_type": "exclusive_write", 00:21:04.874 "zoned": false, 00:21:04.874 "supported_io_types": { 00:21:04.874 "read": true, 00:21:04.874 "write": true, 00:21:04.874 "unmap": true, 00:21:04.874 "flush": true, 00:21:04.874 "reset": true, 00:21:04.874 "nvme_admin": false, 00:21:04.874 "nvme_io": false, 00:21:04.874 "nvme_io_md": false, 00:21:04.874 "write_zeroes": true, 00:21:04.874 "zcopy": true, 00:21:04.874 "get_zone_info": false, 00:21:04.874 "zone_management": false, 00:21:04.874 "zone_append": false, 00:21:04.874 "compare": false, 00:21:04.874 "compare_and_write": false, 00:21:04.874 "abort": true, 00:21:04.874 "seek_hole": false, 00:21:04.874 "seek_data": false, 00:21:04.874 "copy": true, 00:21:04.874 "nvme_iov_md": false 00:21:04.874 }, 00:21:04.874 "memory_domains": [ 00:21:04.874 { 00:21:04.874 "dma_device_id": "system", 00:21:04.874 "dma_device_type": 1 
00:21:04.874 }, 00:21:04.874 { 00:21:04.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.874 "dma_device_type": 2 00:21:04.874 } 00:21:04.874 ], 00:21:04.874 "driver_specific": {} 00:21:04.874 } 00:21:04.874 ] 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.874 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.874 "name": "Existed_Raid", 00:21:04.874 "uuid": "b330c1e1-9b59-4f30-b374-48e2ebf104a6", 00:21:04.874 "strip_size_kb": 64, 00:21:04.874 "state": "online", 00:21:04.874 "raid_level": "raid0", 00:21:04.874 "superblock": false, 00:21:04.874 "num_base_bdevs": 4, 00:21:04.874 "num_base_bdevs_discovered": 4, 00:21:04.874 "num_base_bdevs_operational": 4, 00:21:04.874 "base_bdevs_list": [ 00:21:04.874 { 00:21:04.874 "name": "NewBaseBdev", 00:21:04.874 "uuid": "763b4838-ce4a-4183-8501-a86383326315", 00:21:04.874 "is_configured": true, 00:21:04.874 "data_offset": 0, 00:21:04.874 "data_size": 65536 00:21:04.874 }, 00:21:04.874 { 00:21:04.874 "name": "BaseBdev2", 00:21:04.874 "uuid": "628f75ce-e656-4959-911a-e8dbd267dabe", 00:21:04.874 "is_configured": true, 00:21:04.874 "data_offset": 0, 00:21:04.874 "data_size": 65536 00:21:04.874 }, 00:21:04.874 { 00:21:04.874 "name": "BaseBdev3", 00:21:04.874 "uuid": "53270531-b273-4fc5-abac-84c6495d047e", 00:21:04.874 "is_configured": true, 00:21:04.875 "data_offset": 0, 00:21:04.875 "data_size": 65536 00:21:04.875 }, 00:21:04.875 { 00:21:04.875 "name": "BaseBdev4", 00:21:04.875 "uuid": "cd20e18d-7078-4051-8aba-d0069dad71a0", 00:21:04.875 "is_configured": true, 00:21:04.875 "data_offset": 0, 00:21:04.875 "data_size": 65536 00:21:04.875 } 00:21:04.875 ] 00:21:04.875 }' 00:21:04.875 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.875 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:05.132 [2024-11-08 17:08:41.789006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.132 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:05.132 "name": "Existed_Raid", 00:21:05.132 "aliases": [ 00:21:05.132 "b330c1e1-9b59-4f30-b374-48e2ebf104a6" 00:21:05.132 ], 00:21:05.132 "product_name": "Raid Volume", 00:21:05.133 "block_size": 512, 00:21:05.133 "num_blocks": 262144, 00:21:05.133 "uuid": "b330c1e1-9b59-4f30-b374-48e2ebf104a6", 00:21:05.133 "assigned_rate_limits": { 00:21:05.133 "rw_ios_per_sec": 0, 00:21:05.133 "rw_mbytes_per_sec": 0, 00:21:05.133 "r_mbytes_per_sec": 0, 00:21:05.133 "w_mbytes_per_sec": 0 00:21:05.133 }, 00:21:05.133 "claimed": false, 00:21:05.133 "zoned": false, 00:21:05.133 "supported_io_types": { 00:21:05.133 "read": true, 00:21:05.133 "write": true, 00:21:05.133 "unmap": true, 00:21:05.133 "flush": true, 00:21:05.133 "reset": true, 00:21:05.133 "nvme_admin": 
false, 00:21:05.133 "nvme_io": false, 00:21:05.133 "nvme_io_md": false, 00:21:05.133 "write_zeroes": true, 00:21:05.133 "zcopy": false, 00:21:05.133 "get_zone_info": false, 00:21:05.133 "zone_management": false, 00:21:05.133 "zone_append": false, 00:21:05.133 "compare": false, 00:21:05.133 "compare_and_write": false, 00:21:05.133 "abort": false, 00:21:05.133 "seek_hole": false, 00:21:05.133 "seek_data": false, 00:21:05.133 "copy": false, 00:21:05.133 "nvme_iov_md": false 00:21:05.133 }, 00:21:05.133 "memory_domains": [ 00:21:05.133 { 00:21:05.133 "dma_device_id": "system", 00:21:05.133 "dma_device_type": 1 00:21:05.133 }, 00:21:05.133 { 00:21:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.133 "dma_device_type": 2 00:21:05.133 }, 00:21:05.133 { 00:21:05.133 "dma_device_id": "system", 00:21:05.133 "dma_device_type": 1 00:21:05.133 }, 00:21:05.133 { 00:21:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.133 "dma_device_type": 2 00:21:05.133 }, 00:21:05.133 { 00:21:05.133 "dma_device_id": "system", 00:21:05.133 "dma_device_type": 1 00:21:05.133 }, 00:21:05.133 { 00:21:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.133 "dma_device_type": 2 00:21:05.133 }, 00:21:05.133 { 00:21:05.133 "dma_device_id": "system", 00:21:05.133 "dma_device_type": 1 00:21:05.133 }, 00:21:05.133 { 00:21:05.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.133 "dma_device_type": 2 00:21:05.133 } 00:21:05.133 ], 00:21:05.133 "driver_specific": { 00:21:05.133 "raid": { 00:21:05.133 "uuid": "b330c1e1-9b59-4f30-b374-48e2ebf104a6", 00:21:05.133 "strip_size_kb": 64, 00:21:05.133 "state": "online", 00:21:05.133 "raid_level": "raid0", 00:21:05.133 "superblock": false, 00:21:05.133 "num_base_bdevs": 4, 00:21:05.133 "num_base_bdevs_discovered": 4, 00:21:05.133 "num_base_bdevs_operational": 4, 00:21:05.133 "base_bdevs_list": [ 00:21:05.133 { 00:21:05.133 "name": "NewBaseBdev", 00:21:05.133 "uuid": "763b4838-ce4a-4183-8501-a86383326315", 00:21:05.133 "is_configured": 
true, 00:21:05.133 "data_offset": 0, 00:21:05.133 "data_size": 65536 00:21:05.133 }, 00:21:05.133 { 00:21:05.133 "name": "BaseBdev2", 00:21:05.133 "uuid": "628f75ce-e656-4959-911a-e8dbd267dabe", 00:21:05.133 "is_configured": true, 00:21:05.133 "data_offset": 0, 00:21:05.133 "data_size": 65536 00:21:05.133 }, 00:21:05.133 { 00:21:05.133 "name": "BaseBdev3", 00:21:05.133 "uuid": "53270531-b273-4fc5-abac-84c6495d047e", 00:21:05.133 "is_configured": true, 00:21:05.133 "data_offset": 0, 00:21:05.133 "data_size": 65536 00:21:05.133 }, 00:21:05.133 { 00:21:05.133 "name": "BaseBdev4", 00:21:05.133 "uuid": "cd20e18d-7078-4051-8aba-d0069dad71a0", 00:21:05.133 "is_configured": true, 00:21:05.133 "data_offset": 0, 00:21:05.133 "data_size": 65536 00:21:05.133 } 00:21:05.133 ] 00:21:05.133 } 00:21:05.133 } 00:21:05.133 }' 00:21:05.133 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:05.391 BaseBdev2 00:21:05.391 BaseBdev3 00:21:05.391 BaseBdev4' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.391 17:08:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.391 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.392 17:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.392 [2024-11-08 17:08:42.012665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:05.392 [2024-11-08 17:08:42.012873] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.392 [2024-11-08 17:08:42.013021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.392 [2024-11-08 17:08:42.013149] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.392 [2024-11-08 17:08:42.013171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 68033 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 68033 ']' 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 68033 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68033 00:21:05.392 killing process with pid 68033 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68033' 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 68033 00:21:05.392 17:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 68033 00:21:05.392 [2024-11-08 17:08:42.043115] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:05.649 [2024-11-08 17:08:42.312107] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:06.582 
00:21:06.582 real 0m8.921s 00:21:06.582 user 0m14.085s 00:21:06.582 sys 0m1.522s 00:21:06.582 ************************************ 00:21:06.582 END TEST raid_state_function_test 00:21:06.582 ************************************ 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.582 17:08:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:21:06.582 17:08:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:06.582 17:08:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:06.582 17:08:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:06.582 ************************************ 00:21:06.582 START TEST raid_state_function_test_sb 00:21:06.582 ************************************ 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:06.582 
17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:06.582 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:06.583 Process raid pid: 68676 00:21:06.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68676 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68676' 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68676 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68676 ']' 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:06.583 17:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.583 [2024-11-08 17:08:43.241200] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:21:06.583 [2024-11-08 17:08:43.241560] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.841 [2024-11-08 17:08:43.402808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.841 [2024-11-08 17:08:43.523692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.140 [2024-11-08 17:08:43.677983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:07.140 [2024-11-08 17:08:43.678210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:07.398 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:07.398 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:21:07.399 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:07.399 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.399 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.399 [2024-11-08 17:08:44.108741] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:07.399 [2024-11-08 17:08:44.108918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:07.399 [2024-11-08 17:08:44.108983] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:07.399 [2024-11-08 17:08:44.109000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:07.399 [2024-11-08 17:08:44.109007] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:21:07.399 [2024-11-08 17:08:44.109015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:07.399 [2024-11-08 17:08:44.109022] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:07.399 [2024-11-08 17:08:44.109031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.657 17:08:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.657 "name": "Existed_Raid", 00:21:07.657 "uuid": "00b0b0cc-4544-4f90-93d5-ac5d6788b826", 00:21:07.657 "strip_size_kb": 64, 00:21:07.657 "state": "configuring", 00:21:07.657 "raid_level": "raid0", 00:21:07.657 "superblock": true, 00:21:07.657 "num_base_bdevs": 4, 00:21:07.657 "num_base_bdevs_discovered": 0, 00:21:07.657 "num_base_bdevs_operational": 4, 00:21:07.657 "base_bdevs_list": [ 00:21:07.657 { 00:21:07.657 "name": "BaseBdev1", 00:21:07.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.657 "is_configured": false, 00:21:07.657 "data_offset": 0, 00:21:07.657 "data_size": 0 00:21:07.657 }, 00:21:07.657 { 00:21:07.657 "name": "BaseBdev2", 00:21:07.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.657 "is_configured": false, 00:21:07.657 "data_offset": 0, 00:21:07.657 "data_size": 0 00:21:07.657 }, 00:21:07.657 { 00:21:07.657 "name": "BaseBdev3", 00:21:07.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.657 "is_configured": false, 00:21:07.657 "data_offset": 0, 00:21:07.657 "data_size": 0 00:21:07.657 }, 00:21:07.657 { 00:21:07.657 "name": "BaseBdev4", 00:21:07.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.657 "is_configured": false, 00:21:07.657 "data_offset": 0, 00:21:07.657 "data_size": 0 00:21:07.657 } 00:21:07.657 ] 00:21:07.657 }' 00:21:07.657 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.658 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.915 17:08:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:07.915 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.915 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.915 [2024-11-08 17:08:44.476525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:07.916 [2024-11-08 17:08:44.476577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.916 [2024-11-08 17:08:44.484525] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:07.916 [2024-11-08 17:08:44.484676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:07.916 [2024-11-08 17:08:44.484739] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:07.916 [2024-11-08 17:08:44.484781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:07.916 [2024-11-08 17:08:44.484827] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:07.916 [2024-11-08 17:08:44.484855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:07.916 [2024-11-08 17:08:44.484875] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:21:07.916 [2024-11-08 17:08:44.484898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.916 [2024-11-08 17:08:44.519737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:07.916 BaseBdev1 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.916 [ 00:21:07.916 { 00:21:07.916 "name": "BaseBdev1", 00:21:07.916 "aliases": [ 00:21:07.916 "7ad2568c-db8b-4e21-999d-e4fbdf96f145" 00:21:07.916 ], 00:21:07.916 "product_name": "Malloc disk", 00:21:07.916 "block_size": 512, 00:21:07.916 "num_blocks": 65536, 00:21:07.916 "uuid": "7ad2568c-db8b-4e21-999d-e4fbdf96f145", 00:21:07.916 "assigned_rate_limits": { 00:21:07.916 "rw_ios_per_sec": 0, 00:21:07.916 "rw_mbytes_per_sec": 0, 00:21:07.916 "r_mbytes_per_sec": 0, 00:21:07.916 "w_mbytes_per_sec": 0 00:21:07.916 }, 00:21:07.916 "claimed": true, 00:21:07.916 "claim_type": "exclusive_write", 00:21:07.916 "zoned": false, 00:21:07.916 "supported_io_types": { 00:21:07.916 "read": true, 00:21:07.916 "write": true, 00:21:07.916 "unmap": true, 00:21:07.916 "flush": true, 00:21:07.916 "reset": true, 00:21:07.916 "nvme_admin": false, 00:21:07.916 "nvme_io": false, 00:21:07.916 "nvme_io_md": false, 00:21:07.916 "write_zeroes": true, 00:21:07.916 "zcopy": true, 00:21:07.916 "get_zone_info": false, 00:21:07.916 "zone_management": false, 00:21:07.916 "zone_append": false, 00:21:07.916 "compare": false, 00:21:07.916 "compare_and_write": false, 00:21:07.916 "abort": true, 00:21:07.916 "seek_hole": false, 00:21:07.916 "seek_data": false, 00:21:07.916 "copy": true, 00:21:07.916 "nvme_iov_md": false 00:21:07.916 }, 00:21:07.916 "memory_domains": [ 00:21:07.916 { 00:21:07.916 "dma_device_id": "system", 00:21:07.916 "dma_device_type": 1 00:21:07.916 }, 00:21:07.916 { 00:21:07.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.916 "dma_device_type": 2 00:21:07.916 } 
00:21:07.916 ], 00:21:07.916 "driver_specific": {} 00:21:07.916 } 00:21:07.916 ] 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.916 17:08:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.916 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.916 "name": "Existed_Raid", 00:21:07.916 "uuid": "20646cc4-5d2a-45f0-bafc-7178c2ca5a43", 00:21:07.916 "strip_size_kb": 64, 00:21:07.916 "state": "configuring", 00:21:07.917 "raid_level": "raid0", 00:21:07.917 "superblock": true, 00:21:07.917 "num_base_bdevs": 4, 00:21:07.917 "num_base_bdevs_discovered": 1, 00:21:07.917 "num_base_bdevs_operational": 4, 00:21:07.917 "base_bdevs_list": [ 00:21:07.917 { 00:21:07.917 "name": "BaseBdev1", 00:21:07.917 "uuid": "7ad2568c-db8b-4e21-999d-e4fbdf96f145", 00:21:07.917 "is_configured": true, 00:21:07.917 "data_offset": 2048, 00:21:07.917 "data_size": 63488 00:21:07.917 }, 00:21:07.917 { 00:21:07.917 "name": "BaseBdev2", 00:21:07.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.917 "is_configured": false, 00:21:07.917 "data_offset": 0, 00:21:07.917 "data_size": 0 00:21:07.917 }, 00:21:07.917 { 00:21:07.917 "name": "BaseBdev3", 00:21:07.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.917 "is_configured": false, 00:21:07.917 "data_offset": 0, 00:21:07.917 "data_size": 0 00:21:07.917 }, 00:21:07.917 { 00:21:07.917 "name": "BaseBdev4", 00:21:07.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.917 "is_configured": false, 00:21:07.917 "data_offset": 0, 00:21:07.917 "data_size": 0 00:21:07.917 } 00:21:07.917 ] 00:21:07.917 }' 00:21:07.917 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.917 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.175 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:08.175 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.175 17:08:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.175 [2024-11-08 17:08:44.879922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:08.175 [2024-11-08 17:08:44.880131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:08.175 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.175 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:08.175 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.175 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.433 [2024-11-08 17:08:44.887985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:08.433 [2024-11-08 17:08:44.890187] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:08.433 [2024-11-08 17:08:44.890328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:08.433 [2024-11-08 17:08:44.890393] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:08.433 [2024-11-08 17:08:44.890426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:08.433 [2024-11-08 17:08:44.890447] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:08.433 [2024-11-08 17:08:44.890497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:08.433 "name": "Existed_Raid", 00:21:08.433 "uuid": "5f25de35-31de-4b16-8a16-87a84c10df66", 00:21:08.433 "strip_size_kb": 64, 00:21:08.433 "state": "configuring", 00:21:08.433 "raid_level": "raid0", 00:21:08.433 "superblock": true, 00:21:08.433 "num_base_bdevs": 4, 00:21:08.433 "num_base_bdevs_discovered": 1, 00:21:08.433 "num_base_bdevs_operational": 4, 00:21:08.433 "base_bdevs_list": [ 00:21:08.433 { 00:21:08.433 "name": "BaseBdev1", 00:21:08.433 "uuid": "7ad2568c-db8b-4e21-999d-e4fbdf96f145", 00:21:08.433 "is_configured": true, 00:21:08.433 "data_offset": 2048, 00:21:08.433 "data_size": 63488 00:21:08.433 }, 00:21:08.433 { 00:21:08.433 "name": "BaseBdev2", 00:21:08.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.433 "is_configured": false, 00:21:08.433 "data_offset": 0, 00:21:08.433 "data_size": 0 00:21:08.433 }, 00:21:08.433 { 00:21:08.433 "name": "BaseBdev3", 00:21:08.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.433 "is_configured": false, 00:21:08.433 "data_offset": 0, 00:21:08.433 "data_size": 0 00:21:08.433 }, 00:21:08.433 { 00:21:08.433 "name": "BaseBdev4", 00:21:08.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.433 "is_configured": false, 00:21:08.433 "data_offset": 0, 00:21:08.433 "data_size": 0 00:21:08.433 } 00:21:08.433 ] 00:21:08.433 }' 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.433 17:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.692 BaseBdev2 00:21:08.692 [2024-11-08 17:08:45.291072] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:08.692 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.693 [ 00:21:08.693 { 00:21:08.693 "name": "BaseBdev2", 00:21:08.693 "aliases": [ 00:21:08.693 "346faaff-17b9-4ff7-961e-e77a49aa1c05" 00:21:08.693 ], 00:21:08.693 "product_name": "Malloc disk", 00:21:08.693 "block_size": 512, 00:21:08.693 "num_blocks": 65536, 00:21:08.693 "uuid": 
"346faaff-17b9-4ff7-961e-e77a49aa1c05", 00:21:08.693 "assigned_rate_limits": { 00:21:08.693 "rw_ios_per_sec": 0, 00:21:08.693 "rw_mbytes_per_sec": 0, 00:21:08.693 "r_mbytes_per_sec": 0, 00:21:08.693 "w_mbytes_per_sec": 0 00:21:08.693 }, 00:21:08.693 "claimed": true, 00:21:08.693 "claim_type": "exclusive_write", 00:21:08.693 "zoned": false, 00:21:08.693 "supported_io_types": { 00:21:08.693 "read": true, 00:21:08.693 "write": true, 00:21:08.693 "unmap": true, 00:21:08.693 "flush": true, 00:21:08.693 "reset": true, 00:21:08.693 "nvme_admin": false, 00:21:08.693 "nvme_io": false, 00:21:08.693 "nvme_io_md": false, 00:21:08.693 "write_zeroes": true, 00:21:08.693 "zcopy": true, 00:21:08.693 "get_zone_info": false, 00:21:08.693 "zone_management": false, 00:21:08.693 "zone_append": false, 00:21:08.693 "compare": false, 00:21:08.693 "compare_and_write": false, 00:21:08.693 "abort": true, 00:21:08.693 "seek_hole": false, 00:21:08.693 "seek_data": false, 00:21:08.693 "copy": true, 00:21:08.693 "nvme_iov_md": false 00:21:08.693 }, 00:21:08.693 "memory_domains": [ 00:21:08.693 { 00:21:08.693 "dma_device_id": "system", 00:21:08.693 "dma_device_type": 1 00:21:08.693 }, 00:21:08.693 { 00:21:08.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.693 "dma_device_type": 2 00:21:08.693 } 00:21:08.693 ], 00:21:08.693 "driver_specific": {} 00:21:08.693 } 00:21:08.693 ] 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:08.693 17:08:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.693 "name": "Existed_Raid", 00:21:08.693 "uuid": "5f25de35-31de-4b16-8a16-87a84c10df66", 00:21:08.693 "strip_size_kb": 64, 00:21:08.693 "state": "configuring", 00:21:08.693 "raid_level": "raid0", 00:21:08.693 "superblock": true, 00:21:08.693 "num_base_bdevs": 4, 00:21:08.693 
"num_base_bdevs_discovered": 2, 00:21:08.693 "num_base_bdevs_operational": 4, 00:21:08.693 "base_bdevs_list": [ 00:21:08.693 { 00:21:08.693 "name": "BaseBdev1", 00:21:08.693 "uuid": "7ad2568c-db8b-4e21-999d-e4fbdf96f145", 00:21:08.693 "is_configured": true, 00:21:08.693 "data_offset": 2048, 00:21:08.693 "data_size": 63488 00:21:08.693 }, 00:21:08.693 { 00:21:08.693 "name": "BaseBdev2", 00:21:08.693 "uuid": "346faaff-17b9-4ff7-961e-e77a49aa1c05", 00:21:08.693 "is_configured": true, 00:21:08.693 "data_offset": 2048, 00:21:08.693 "data_size": 63488 00:21:08.693 }, 00:21:08.693 { 00:21:08.693 "name": "BaseBdev3", 00:21:08.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.693 "is_configured": false, 00:21:08.693 "data_offset": 0, 00:21:08.693 "data_size": 0 00:21:08.693 }, 00:21:08.693 { 00:21:08.693 "name": "BaseBdev4", 00:21:08.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.693 "is_configured": false, 00:21:08.693 "data_offset": 0, 00:21:08.693 "data_size": 0 00:21:08.693 } 00:21:08.693 ] 00:21:08.693 }' 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.693 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.957 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:08.957 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.957 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.215 [2024-11-08 17:08:45.680667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.215 BaseBdev3 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:09.215 17:08:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.215 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.215 [ 00:21:09.215 { 00:21:09.215 "name": "BaseBdev3", 00:21:09.215 "aliases": [ 00:21:09.215 "8eb2456c-7a16-4a66-aae8-56d3ab8beee1" 00:21:09.215 ], 00:21:09.215 "product_name": "Malloc disk", 00:21:09.215 "block_size": 512, 00:21:09.215 "num_blocks": 65536, 00:21:09.215 "uuid": "8eb2456c-7a16-4a66-aae8-56d3ab8beee1", 00:21:09.215 "assigned_rate_limits": { 00:21:09.215 "rw_ios_per_sec": 0, 00:21:09.215 "rw_mbytes_per_sec": 0, 00:21:09.215 "r_mbytes_per_sec": 0, 00:21:09.215 "w_mbytes_per_sec": 0 00:21:09.215 }, 00:21:09.215 "claimed": true, 00:21:09.215 "claim_type": "exclusive_write", 00:21:09.215 "zoned": false, 
00:21:09.215 "supported_io_types": { 00:21:09.215 "read": true, 00:21:09.215 "write": true, 00:21:09.215 "unmap": true, 00:21:09.215 "flush": true, 00:21:09.215 "reset": true, 00:21:09.215 "nvme_admin": false, 00:21:09.215 "nvme_io": false, 00:21:09.215 "nvme_io_md": false, 00:21:09.215 "write_zeroes": true, 00:21:09.215 "zcopy": true, 00:21:09.215 "get_zone_info": false, 00:21:09.215 "zone_management": false, 00:21:09.215 "zone_append": false, 00:21:09.215 "compare": false, 00:21:09.215 "compare_and_write": false, 00:21:09.215 "abort": true, 00:21:09.215 "seek_hole": false, 00:21:09.215 "seek_data": false, 00:21:09.215 "copy": true, 00:21:09.215 "nvme_iov_md": false 00:21:09.215 }, 00:21:09.215 "memory_domains": [ 00:21:09.215 { 00:21:09.215 "dma_device_id": "system", 00:21:09.215 "dma_device_type": 1 00:21:09.215 }, 00:21:09.215 { 00:21:09.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.215 "dma_device_type": 2 00:21:09.215 } 00:21:09.215 ], 00:21:09.215 "driver_specific": {} 00:21:09.215 } 00:21:09.215 ] 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:09.216 17:08:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.216 "name": "Existed_Raid", 00:21:09.216 "uuid": "5f25de35-31de-4b16-8a16-87a84c10df66", 00:21:09.216 "strip_size_kb": 64, 00:21:09.216 "state": "configuring", 00:21:09.216 "raid_level": "raid0", 00:21:09.216 "superblock": true, 00:21:09.216 "num_base_bdevs": 4, 00:21:09.216 "num_base_bdevs_discovered": 3, 00:21:09.216 "num_base_bdevs_operational": 4, 00:21:09.216 "base_bdevs_list": [ 00:21:09.216 { 00:21:09.216 "name": "BaseBdev1", 00:21:09.216 "uuid": "7ad2568c-db8b-4e21-999d-e4fbdf96f145", 00:21:09.216 "is_configured": true, 00:21:09.216 "data_offset": 2048, 00:21:09.216 "data_size": 63488 00:21:09.216 }, 00:21:09.216 { 
00:21:09.216 "name": "BaseBdev2", 00:21:09.216 "uuid": "346faaff-17b9-4ff7-961e-e77a49aa1c05", 00:21:09.216 "is_configured": true, 00:21:09.216 "data_offset": 2048, 00:21:09.216 "data_size": 63488 00:21:09.216 }, 00:21:09.216 { 00:21:09.216 "name": "BaseBdev3", 00:21:09.216 "uuid": "8eb2456c-7a16-4a66-aae8-56d3ab8beee1", 00:21:09.216 "is_configured": true, 00:21:09.216 "data_offset": 2048, 00:21:09.216 "data_size": 63488 00:21:09.216 }, 00:21:09.216 { 00:21:09.216 "name": "BaseBdev4", 00:21:09.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.216 "is_configured": false, 00:21:09.216 "data_offset": 0, 00:21:09.216 "data_size": 0 00:21:09.216 } 00:21:09.216 ] 00:21:09.216 }' 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.216 17:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.474 [2024-11-08 17:08:46.093722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:09.474 [2024-11-08 17:08:46.094269] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:09.474 [2024-11-08 17:08:46.094312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:09.474 [2024-11-08 17:08:46.094662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:09.474 BaseBdev4 00:21:09.474 [2024-11-08 17:08:46.094899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:09.474 [2024-11-08 17:08:46.094919] bdev_raid.c:1765:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:09.474 [2024-11-08 17:08:46.095061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.474 [ 00:21:09.474 { 00:21:09.474 "name": "BaseBdev4", 00:21:09.474 "aliases": [ 00:21:09.474 "2b974189-48e4-4a66-b2e7-e6dcf5aa19c7" 00:21:09.474 ], 00:21:09.474 "product_name": "Malloc 
disk", 00:21:09.474 "block_size": 512, 00:21:09.474 "num_blocks": 65536, 00:21:09.474 "uuid": "2b974189-48e4-4a66-b2e7-e6dcf5aa19c7", 00:21:09.474 "assigned_rate_limits": { 00:21:09.474 "rw_ios_per_sec": 0, 00:21:09.474 "rw_mbytes_per_sec": 0, 00:21:09.474 "r_mbytes_per_sec": 0, 00:21:09.474 "w_mbytes_per_sec": 0 00:21:09.474 }, 00:21:09.474 "claimed": true, 00:21:09.474 "claim_type": "exclusive_write", 00:21:09.474 "zoned": false, 00:21:09.474 "supported_io_types": { 00:21:09.474 "read": true, 00:21:09.474 "write": true, 00:21:09.474 "unmap": true, 00:21:09.474 "flush": true, 00:21:09.474 "reset": true, 00:21:09.474 "nvme_admin": false, 00:21:09.474 "nvme_io": false, 00:21:09.474 "nvme_io_md": false, 00:21:09.474 "write_zeroes": true, 00:21:09.474 "zcopy": true, 00:21:09.474 "get_zone_info": false, 00:21:09.474 "zone_management": false, 00:21:09.474 "zone_append": false, 00:21:09.474 "compare": false, 00:21:09.474 "compare_and_write": false, 00:21:09.474 "abort": true, 00:21:09.474 "seek_hole": false, 00:21:09.474 "seek_data": false, 00:21:09.474 "copy": true, 00:21:09.474 "nvme_iov_md": false 00:21:09.474 }, 00:21:09.474 "memory_domains": [ 00:21:09.474 { 00:21:09.474 "dma_device_id": "system", 00:21:09.474 "dma_device_type": 1 00:21:09.474 }, 00:21:09.474 { 00:21:09.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.474 "dma_device_type": 2 00:21:09.474 } 00:21:09.474 ], 00:21:09.474 "driver_specific": {} 00:21:09.474 } 00:21:09.474 ] 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.474 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.474 "name": "Existed_Raid", 00:21:09.474 "uuid": "5f25de35-31de-4b16-8a16-87a84c10df66", 00:21:09.474 "strip_size_kb": 64, 00:21:09.474 "state": "online", 00:21:09.474 "raid_level": "raid0", 00:21:09.475 
"superblock": true, 00:21:09.475 "num_base_bdevs": 4, 00:21:09.475 "num_base_bdevs_discovered": 4, 00:21:09.475 "num_base_bdevs_operational": 4, 00:21:09.475 "base_bdevs_list": [ 00:21:09.475 { 00:21:09.475 "name": "BaseBdev1", 00:21:09.475 "uuid": "7ad2568c-db8b-4e21-999d-e4fbdf96f145", 00:21:09.475 "is_configured": true, 00:21:09.475 "data_offset": 2048, 00:21:09.475 "data_size": 63488 00:21:09.475 }, 00:21:09.475 { 00:21:09.475 "name": "BaseBdev2", 00:21:09.475 "uuid": "346faaff-17b9-4ff7-961e-e77a49aa1c05", 00:21:09.475 "is_configured": true, 00:21:09.475 "data_offset": 2048, 00:21:09.475 "data_size": 63488 00:21:09.475 }, 00:21:09.475 { 00:21:09.475 "name": "BaseBdev3", 00:21:09.475 "uuid": "8eb2456c-7a16-4a66-aae8-56d3ab8beee1", 00:21:09.475 "is_configured": true, 00:21:09.475 "data_offset": 2048, 00:21:09.475 "data_size": 63488 00:21:09.475 }, 00:21:09.475 { 00:21:09.475 "name": "BaseBdev4", 00:21:09.475 "uuid": "2b974189-48e4-4a66-b2e7-e6dcf5aa19c7", 00:21:09.475 "is_configured": true, 00:21:09.475 "data_offset": 2048, 00:21:09.475 "data_size": 63488 00:21:09.475 } 00:21:09.475 ] 00:21:09.475 }' 00:21:09.475 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.475 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.040 [2024-11-08 17:08:46.478274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.040 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:10.040 "name": "Existed_Raid", 00:21:10.040 "aliases": [ 00:21:10.040 "5f25de35-31de-4b16-8a16-87a84c10df66" 00:21:10.040 ], 00:21:10.040 "product_name": "Raid Volume", 00:21:10.040 "block_size": 512, 00:21:10.040 "num_blocks": 253952, 00:21:10.040 "uuid": "5f25de35-31de-4b16-8a16-87a84c10df66", 00:21:10.040 "assigned_rate_limits": { 00:21:10.040 "rw_ios_per_sec": 0, 00:21:10.040 "rw_mbytes_per_sec": 0, 00:21:10.040 "r_mbytes_per_sec": 0, 00:21:10.040 "w_mbytes_per_sec": 0 00:21:10.040 }, 00:21:10.040 "claimed": false, 00:21:10.040 "zoned": false, 00:21:10.040 "supported_io_types": { 00:21:10.040 "read": true, 00:21:10.040 "write": true, 00:21:10.040 "unmap": true, 00:21:10.040 "flush": true, 00:21:10.040 "reset": true, 00:21:10.040 "nvme_admin": false, 00:21:10.040 "nvme_io": false, 00:21:10.040 "nvme_io_md": false, 00:21:10.040 "write_zeroes": true, 00:21:10.040 "zcopy": false, 00:21:10.040 "get_zone_info": false, 00:21:10.040 "zone_management": false, 00:21:10.040 "zone_append": false, 00:21:10.040 "compare": false, 00:21:10.040 "compare_and_write": false, 00:21:10.040 "abort": false, 00:21:10.040 "seek_hole": false, 00:21:10.040 "seek_data": false, 
00:21:10.040 "copy": false, 00:21:10.040 "nvme_iov_md": false 00:21:10.040 }, 00:21:10.040 "memory_domains": [ 00:21:10.040 { 00:21:10.040 "dma_device_id": "system", 00:21:10.041 "dma_device_type": 1 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.041 "dma_device_type": 2 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "dma_device_id": "system", 00:21:10.041 "dma_device_type": 1 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.041 "dma_device_type": 2 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "dma_device_id": "system", 00:21:10.041 "dma_device_type": 1 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.041 "dma_device_type": 2 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "dma_device_id": "system", 00:21:10.041 "dma_device_type": 1 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.041 "dma_device_type": 2 00:21:10.041 } 00:21:10.041 ], 00:21:10.041 "driver_specific": { 00:21:10.041 "raid": { 00:21:10.041 "uuid": "5f25de35-31de-4b16-8a16-87a84c10df66", 00:21:10.041 "strip_size_kb": 64, 00:21:10.041 "state": "online", 00:21:10.041 "raid_level": "raid0", 00:21:10.041 "superblock": true, 00:21:10.041 "num_base_bdevs": 4, 00:21:10.041 "num_base_bdevs_discovered": 4, 00:21:10.041 "num_base_bdevs_operational": 4, 00:21:10.041 "base_bdevs_list": [ 00:21:10.041 { 00:21:10.041 "name": "BaseBdev1", 00:21:10.041 "uuid": "7ad2568c-db8b-4e21-999d-e4fbdf96f145", 00:21:10.041 "is_configured": true, 00:21:10.041 "data_offset": 2048, 00:21:10.041 "data_size": 63488 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "name": "BaseBdev2", 00:21:10.041 "uuid": "346faaff-17b9-4ff7-961e-e77a49aa1c05", 00:21:10.041 "is_configured": true, 00:21:10.041 "data_offset": 2048, 00:21:10.041 "data_size": 63488 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "name": "BaseBdev3", 00:21:10.041 "uuid": 
"8eb2456c-7a16-4a66-aae8-56d3ab8beee1", 00:21:10.041 "is_configured": true, 00:21:10.041 "data_offset": 2048, 00:21:10.041 "data_size": 63488 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "name": "BaseBdev4", 00:21:10.041 "uuid": "2b974189-48e4-4a66-b2e7-e6dcf5aa19c7", 00:21:10.041 "is_configured": true, 00:21:10.041 "data_offset": 2048, 00:21:10.041 "data_size": 63488 00:21:10.041 } 00:21:10.041 ] 00:21:10.041 } 00:21:10.041 } 00:21:10.041 }' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:10.041 BaseBdev2 00:21:10.041 BaseBdev3 00:21:10.041 BaseBdev4' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:10.041 
17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.041 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.041 [2024-11-08 17:08:46.738034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:10.041 [2024-11-08 17:08:46.738200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:10.041 [2024-11-08 17:08:46.738272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:21:10.299 
17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.299 17:08:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.299 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.299 "name": "Existed_Raid", 00:21:10.299 "uuid": "5f25de35-31de-4b16-8a16-87a84c10df66", 00:21:10.299 "strip_size_kb": 64, 00:21:10.299 "state": "offline", 00:21:10.299 "raid_level": "raid0", 00:21:10.299 "superblock": true, 00:21:10.299 "num_base_bdevs": 4, 00:21:10.299 "num_base_bdevs_discovered": 3, 00:21:10.299 "num_base_bdevs_operational": 3, 00:21:10.299 "base_bdevs_list": [ 00:21:10.299 { 00:21:10.299 "name": null, 00:21:10.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.299 "is_configured": false, 00:21:10.299 "data_offset": 0, 00:21:10.299 "data_size": 63488 00:21:10.299 }, 00:21:10.299 { 00:21:10.300 "name": "BaseBdev2", 00:21:10.300 "uuid": "346faaff-17b9-4ff7-961e-e77a49aa1c05", 00:21:10.300 "is_configured": true, 00:21:10.300 "data_offset": 2048, 00:21:10.300 "data_size": 63488 00:21:10.300 }, 00:21:10.300 { 00:21:10.300 "name": "BaseBdev3", 00:21:10.300 "uuid": "8eb2456c-7a16-4a66-aae8-56d3ab8beee1", 00:21:10.300 "is_configured": true, 00:21:10.300 "data_offset": 2048, 00:21:10.300 "data_size": 63488 00:21:10.300 }, 00:21:10.300 { 00:21:10.300 "name": "BaseBdev4", 00:21:10.300 "uuid": "2b974189-48e4-4a66-b2e7-e6dcf5aa19c7", 00:21:10.300 "is_configured": true, 00:21:10.300 "data_offset": 2048, 00:21:10.300 "data_size": 63488 00:21:10.300 } 00:21:10.300 ] 00:21:10.300 }' 00:21:10.300 17:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.300 17:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:10.558 17:08:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.558 [2024-11-08 17:08:47.201999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:10.558 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.824 [2024-11-08 17:08:47.306731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:10.824 17:08:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.824 [2024-11-08 17:08:47.411368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:10.824 [2024-11-08 17:08:47.411543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.824 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.825 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.112 BaseBdev2 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:21:11.112 [ 00:21:11.112 { 00:21:11.112 "name": "BaseBdev2", 00:21:11.112 "aliases": [ 00:21:11.112 "4a515037-4d4a-448f-9b52-028d9009c3f1" 00:21:11.112 ], 00:21:11.112 "product_name": "Malloc disk", 00:21:11.112 "block_size": 512, 00:21:11.112 "num_blocks": 65536, 00:21:11.112 "uuid": "4a515037-4d4a-448f-9b52-028d9009c3f1", 00:21:11.112 "assigned_rate_limits": { 00:21:11.112 "rw_ios_per_sec": 0, 00:21:11.112 "rw_mbytes_per_sec": 0, 00:21:11.112 "r_mbytes_per_sec": 0, 00:21:11.112 "w_mbytes_per_sec": 0 00:21:11.112 }, 00:21:11.112 "claimed": false, 00:21:11.112 "zoned": false, 00:21:11.112 "supported_io_types": { 00:21:11.112 "read": true, 00:21:11.112 "write": true, 00:21:11.112 "unmap": true, 00:21:11.112 "flush": true, 00:21:11.112 "reset": true, 00:21:11.112 "nvme_admin": false, 00:21:11.112 "nvme_io": false, 00:21:11.112 "nvme_io_md": false, 00:21:11.112 "write_zeroes": true, 00:21:11.112 "zcopy": true, 00:21:11.112 "get_zone_info": false, 00:21:11.112 "zone_management": false, 00:21:11.112 "zone_append": false, 00:21:11.112 "compare": false, 00:21:11.112 "compare_and_write": false, 00:21:11.112 "abort": true, 00:21:11.112 "seek_hole": false, 00:21:11.112 "seek_data": false, 00:21:11.112 "copy": true, 00:21:11.112 "nvme_iov_md": false 00:21:11.112 }, 00:21:11.112 "memory_domains": [ 00:21:11.112 { 00:21:11.112 "dma_device_id": "system", 00:21:11.112 "dma_device_type": 1 00:21:11.112 }, 00:21:11.112 { 00:21:11.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.112 "dma_device_type": 2 00:21:11.112 } 00:21:11.112 ], 00:21:11.112 "driver_specific": {} 00:21:11.112 } 00:21:11.112 ] 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.112 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:11.113 17:08:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 BaseBdev3 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.113 17:08:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 [ 00:21:11.113 { 00:21:11.113 "name": "BaseBdev3", 00:21:11.113 "aliases": [ 00:21:11.113 "9dceb45f-ea35-4622-b2b7-1f272e95fa78" 00:21:11.113 ], 00:21:11.113 "product_name": "Malloc disk", 00:21:11.113 "block_size": 512, 00:21:11.113 "num_blocks": 65536, 00:21:11.113 "uuid": "9dceb45f-ea35-4622-b2b7-1f272e95fa78", 00:21:11.113 "assigned_rate_limits": { 00:21:11.113 "rw_ios_per_sec": 0, 00:21:11.113 "rw_mbytes_per_sec": 0, 00:21:11.113 "r_mbytes_per_sec": 0, 00:21:11.113 "w_mbytes_per_sec": 0 00:21:11.113 }, 00:21:11.113 "claimed": false, 00:21:11.113 "zoned": false, 00:21:11.113 "supported_io_types": { 00:21:11.113 "read": true, 00:21:11.113 "write": true, 00:21:11.113 "unmap": true, 00:21:11.113 "flush": true, 00:21:11.113 "reset": true, 00:21:11.113 "nvme_admin": false, 00:21:11.113 "nvme_io": false, 00:21:11.113 "nvme_io_md": false, 00:21:11.113 "write_zeroes": true, 00:21:11.113 "zcopy": true, 00:21:11.113 "get_zone_info": false, 00:21:11.113 "zone_management": false, 00:21:11.113 "zone_append": false, 00:21:11.113 "compare": false, 00:21:11.113 "compare_and_write": false, 00:21:11.113 "abort": true, 00:21:11.113 "seek_hole": false, 00:21:11.113 "seek_data": false, 00:21:11.113 "copy": true, 00:21:11.113 "nvme_iov_md": false 00:21:11.113 }, 00:21:11.113 "memory_domains": [ 00:21:11.113 { 00:21:11.113 "dma_device_id": "system", 00:21:11.113 "dma_device_type": 1 00:21:11.113 }, 00:21:11.113 { 00:21:11.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.113 "dma_device_type": 2 00:21:11.113 } 00:21:11.113 ], 00:21:11.113 "driver_specific": {} 00:21:11.113 } 00:21:11.113 ] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 BaseBdev4 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 [ 00:21:11.113 { 00:21:11.113 "name": "BaseBdev4", 00:21:11.113 "aliases": [ 00:21:11.113 "5af750ab-ead9-44be-a3d5-879c1e00e96b" 00:21:11.113 ], 00:21:11.113 "product_name": "Malloc disk", 00:21:11.113 "block_size": 512, 00:21:11.113 "num_blocks": 65536, 00:21:11.113 "uuid": "5af750ab-ead9-44be-a3d5-879c1e00e96b", 00:21:11.113 "assigned_rate_limits": { 00:21:11.113 "rw_ios_per_sec": 0, 00:21:11.113 "rw_mbytes_per_sec": 0, 00:21:11.113 "r_mbytes_per_sec": 0, 00:21:11.113 "w_mbytes_per_sec": 0 00:21:11.113 }, 00:21:11.113 "claimed": false, 00:21:11.113 "zoned": false, 00:21:11.113 "supported_io_types": { 00:21:11.113 "read": true, 00:21:11.113 "write": true, 00:21:11.113 "unmap": true, 00:21:11.113 "flush": true, 00:21:11.113 "reset": true, 00:21:11.113 "nvme_admin": false, 00:21:11.113 "nvme_io": false, 00:21:11.113 "nvme_io_md": false, 00:21:11.113 "write_zeroes": true, 00:21:11.113 "zcopy": true, 00:21:11.113 "get_zone_info": false, 00:21:11.113 "zone_management": false, 00:21:11.113 "zone_append": false, 00:21:11.113 "compare": false, 00:21:11.113 "compare_and_write": false, 00:21:11.113 "abort": true, 00:21:11.113 "seek_hole": false, 00:21:11.113 "seek_data": false, 00:21:11.113 "copy": true, 00:21:11.113 "nvme_iov_md": false 00:21:11.113 }, 00:21:11.113 "memory_domains": [ 00:21:11.113 { 00:21:11.113 "dma_device_id": "system", 00:21:11.113 "dma_device_type": 1 00:21:11.113 }, 00:21:11.113 { 00:21:11.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.113 "dma_device_type": 2 00:21:11.113 } 00:21:11.113 ], 00:21:11.113 "driver_specific": {} 00:21:11.113 } 00:21:11.113 ] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.113 [2024-11-08 17:08:47.678590] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:11.113 [2024-11-08 17:08:47.678750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:11.113 [2024-11-08 17:08:47.678840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:11.113 [2024-11-08 17:08:47.680932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:11.113 [2024-11-08 17:08:47.681070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:21:11.113 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.114 "name": "Existed_Raid", 00:21:11.114 "uuid": "d5344c0b-7113-4755-9a25-9ea17c243517", 00:21:11.114 "strip_size_kb": 64, 00:21:11.114 "state": "configuring", 00:21:11.114 "raid_level": "raid0", 00:21:11.114 "superblock": true, 00:21:11.114 "num_base_bdevs": 4, 00:21:11.114 "num_base_bdevs_discovered": 3, 00:21:11.114 "num_base_bdevs_operational": 4, 00:21:11.114 "base_bdevs_list": [ 00:21:11.114 { 00:21:11.114 "name": "BaseBdev1", 00:21:11.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.114 "is_configured": false, 00:21:11.114 "data_offset": 0, 00:21:11.114 "data_size": 0 00:21:11.114 }, 00:21:11.114 { 00:21:11.114 "name": "BaseBdev2", 00:21:11.114 "uuid": 
"4a515037-4d4a-448f-9b52-028d9009c3f1", 00:21:11.114 "is_configured": true, 00:21:11.114 "data_offset": 2048, 00:21:11.114 "data_size": 63488 00:21:11.114 }, 00:21:11.114 { 00:21:11.114 "name": "BaseBdev3", 00:21:11.114 "uuid": "9dceb45f-ea35-4622-b2b7-1f272e95fa78", 00:21:11.114 "is_configured": true, 00:21:11.114 "data_offset": 2048, 00:21:11.114 "data_size": 63488 00:21:11.114 }, 00:21:11.114 { 00:21:11.114 "name": "BaseBdev4", 00:21:11.114 "uuid": "5af750ab-ead9-44be-a3d5-879c1e00e96b", 00:21:11.114 "is_configured": true, 00:21:11.114 "data_offset": 2048, 00:21:11.114 "data_size": 63488 00:21:11.114 } 00:21:11.114 ] 00:21:11.114 }' 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.114 17:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.373 [2024-11-08 17:08:48.046681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.373 "name": "Existed_Raid", 00:21:11.373 "uuid": "d5344c0b-7113-4755-9a25-9ea17c243517", 00:21:11.373 "strip_size_kb": 64, 00:21:11.373 "state": "configuring", 00:21:11.373 "raid_level": "raid0", 00:21:11.373 "superblock": true, 00:21:11.373 "num_base_bdevs": 4, 00:21:11.373 "num_base_bdevs_discovered": 2, 00:21:11.373 "num_base_bdevs_operational": 4, 00:21:11.373 "base_bdevs_list": [ 00:21:11.373 { 00:21:11.373 "name": "BaseBdev1", 00:21:11.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.373 "is_configured": false, 00:21:11.373 "data_offset": 0, 00:21:11.373 "data_size": 0 00:21:11.373 }, 00:21:11.373 { 00:21:11.373 "name": null, 00:21:11.373 "uuid": 
"4a515037-4d4a-448f-9b52-028d9009c3f1", 00:21:11.373 "is_configured": false, 00:21:11.373 "data_offset": 0, 00:21:11.373 "data_size": 63488 00:21:11.373 }, 00:21:11.373 { 00:21:11.373 "name": "BaseBdev3", 00:21:11.373 "uuid": "9dceb45f-ea35-4622-b2b7-1f272e95fa78", 00:21:11.373 "is_configured": true, 00:21:11.373 "data_offset": 2048, 00:21:11.373 "data_size": 63488 00:21:11.373 }, 00:21:11.373 { 00:21:11.373 "name": "BaseBdev4", 00:21:11.373 "uuid": "5af750ab-ead9-44be-a3d5-879c1e00e96b", 00:21:11.373 "is_configured": true, 00:21:11.373 "data_offset": 2048, 00:21:11.373 "data_size": 63488 00:21:11.373 } 00:21:11.373 ] 00:21:11.373 }' 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.373 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.939 BaseBdev1 00:21:11.939 [2024-11-08 17:08:48.456003] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:11.939 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.940 [ 00:21:11.940 { 00:21:11.940 "name": "BaseBdev1", 00:21:11.940 "aliases": [ 00:21:11.940 "9b3071ff-3e3f-4514-a178-a2a9ea8cddd1" 00:21:11.940 ], 00:21:11.940 "product_name": "Malloc disk", 00:21:11.940 "block_size": 512, 00:21:11.940 "num_blocks": 65536, 00:21:11.940 "uuid": 
"9b3071ff-3e3f-4514-a178-a2a9ea8cddd1", 00:21:11.940 "assigned_rate_limits": { 00:21:11.940 "rw_ios_per_sec": 0, 00:21:11.940 "rw_mbytes_per_sec": 0, 00:21:11.940 "r_mbytes_per_sec": 0, 00:21:11.940 "w_mbytes_per_sec": 0 00:21:11.940 }, 00:21:11.940 "claimed": true, 00:21:11.940 "claim_type": "exclusive_write", 00:21:11.940 "zoned": false, 00:21:11.940 "supported_io_types": { 00:21:11.940 "read": true, 00:21:11.940 "write": true, 00:21:11.940 "unmap": true, 00:21:11.940 "flush": true, 00:21:11.940 "reset": true, 00:21:11.940 "nvme_admin": false, 00:21:11.940 "nvme_io": false, 00:21:11.940 "nvme_io_md": false, 00:21:11.940 "write_zeroes": true, 00:21:11.940 "zcopy": true, 00:21:11.940 "get_zone_info": false, 00:21:11.940 "zone_management": false, 00:21:11.940 "zone_append": false, 00:21:11.940 "compare": false, 00:21:11.940 "compare_and_write": false, 00:21:11.940 "abort": true, 00:21:11.940 "seek_hole": false, 00:21:11.940 "seek_data": false, 00:21:11.940 "copy": true, 00:21:11.940 "nvme_iov_md": false 00:21:11.940 }, 00:21:11.940 "memory_domains": [ 00:21:11.940 { 00:21:11.940 "dma_device_id": "system", 00:21:11.940 "dma_device_type": 1 00:21:11.940 }, 00:21:11.940 { 00:21:11.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.940 "dma_device_type": 2 00:21:11.940 } 00:21:11.940 ], 00:21:11.940 "driver_specific": {} 00:21:11.940 } 00:21:11.940 ] 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.940 
17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.940 "name": "Existed_Raid", 00:21:11.940 "uuid": "d5344c0b-7113-4755-9a25-9ea17c243517", 00:21:11.940 "strip_size_kb": 64, 00:21:11.940 "state": "configuring", 00:21:11.940 "raid_level": "raid0", 00:21:11.940 "superblock": true, 00:21:11.940 "num_base_bdevs": 4, 00:21:11.940 "num_base_bdevs_discovered": 3, 00:21:11.940 "num_base_bdevs_operational": 4, 00:21:11.940 "base_bdevs_list": [ 00:21:11.940 { 00:21:11.940 "name": "BaseBdev1", 00:21:11.940 "uuid": "9b3071ff-3e3f-4514-a178-a2a9ea8cddd1", 00:21:11.940 
"is_configured": true, 00:21:11.940 "data_offset": 2048, 00:21:11.940 "data_size": 63488 00:21:11.940 }, 00:21:11.940 { 00:21:11.940 "name": null, 00:21:11.940 "uuid": "4a515037-4d4a-448f-9b52-028d9009c3f1", 00:21:11.940 "is_configured": false, 00:21:11.940 "data_offset": 0, 00:21:11.940 "data_size": 63488 00:21:11.940 }, 00:21:11.940 { 00:21:11.940 "name": "BaseBdev3", 00:21:11.940 "uuid": "9dceb45f-ea35-4622-b2b7-1f272e95fa78", 00:21:11.940 "is_configured": true, 00:21:11.940 "data_offset": 2048, 00:21:11.940 "data_size": 63488 00:21:11.940 }, 00:21:11.940 { 00:21:11.940 "name": "BaseBdev4", 00:21:11.940 "uuid": "5af750ab-ead9-44be-a3d5-879c1e00e96b", 00:21:11.940 "is_configured": true, 00:21:11.940 "data_offset": 2048, 00:21:11.940 "data_size": 63488 00:21:11.940 } 00:21:11.940 ] 00:21:11.940 }' 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.940 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.198 17:08:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.198 [2024-11-08 17:08:48.860198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.198 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.199 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.199 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.199 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.199 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.199 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.199 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.199 17:08:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.199 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.199 "name": "Existed_Raid", 00:21:12.199 "uuid": "d5344c0b-7113-4755-9a25-9ea17c243517", 00:21:12.199 "strip_size_kb": 64, 00:21:12.199 "state": "configuring", 00:21:12.199 "raid_level": "raid0", 00:21:12.199 "superblock": true, 00:21:12.199 "num_base_bdevs": 4, 00:21:12.199 "num_base_bdevs_discovered": 2, 00:21:12.199 "num_base_bdevs_operational": 4, 00:21:12.199 "base_bdevs_list": [ 00:21:12.199 { 00:21:12.199 "name": "BaseBdev1", 00:21:12.199 "uuid": "9b3071ff-3e3f-4514-a178-a2a9ea8cddd1", 00:21:12.199 "is_configured": true, 00:21:12.199 "data_offset": 2048, 00:21:12.199 "data_size": 63488 00:21:12.199 }, 00:21:12.199 { 00:21:12.199 "name": null, 00:21:12.199 "uuid": "4a515037-4d4a-448f-9b52-028d9009c3f1", 00:21:12.199 "is_configured": false, 00:21:12.199 "data_offset": 0, 00:21:12.199 "data_size": 63488 00:21:12.199 }, 00:21:12.199 { 00:21:12.199 "name": null, 00:21:12.199 "uuid": "9dceb45f-ea35-4622-b2b7-1f272e95fa78", 00:21:12.199 "is_configured": false, 00:21:12.199 "data_offset": 0, 00:21:12.199 "data_size": 63488 00:21:12.199 }, 00:21:12.199 { 00:21:12.199 "name": "BaseBdev4", 00:21:12.199 "uuid": "5af750ab-ead9-44be-a3d5-879c1e00e96b", 00:21:12.199 "is_configured": true, 00:21:12.199 "data_offset": 2048, 00:21:12.199 "data_size": 63488 00:21:12.199 } 00:21:12.199 ] 00:21:12.199 }' 00:21:12.199 17:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.199 17:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:12.775 
17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.775 [2024-11-08 17:08:49.300348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.775 "name": "Existed_Raid", 00:21:12.775 "uuid": "d5344c0b-7113-4755-9a25-9ea17c243517", 00:21:12.775 "strip_size_kb": 64, 00:21:12.775 "state": "configuring", 00:21:12.775 "raid_level": "raid0", 00:21:12.775 "superblock": true, 00:21:12.775 "num_base_bdevs": 4, 00:21:12.775 "num_base_bdevs_discovered": 3, 00:21:12.775 "num_base_bdevs_operational": 4, 00:21:12.775 "base_bdevs_list": [ 00:21:12.775 { 00:21:12.775 "name": "BaseBdev1", 00:21:12.775 "uuid": "9b3071ff-3e3f-4514-a178-a2a9ea8cddd1", 00:21:12.775 "is_configured": true, 00:21:12.775 "data_offset": 2048, 00:21:12.775 "data_size": 63488 00:21:12.775 }, 00:21:12.775 { 00:21:12.775 "name": null, 00:21:12.775 "uuid": "4a515037-4d4a-448f-9b52-028d9009c3f1", 00:21:12.775 "is_configured": false, 00:21:12.775 "data_offset": 0, 00:21:12.775 "data_size": 63488 00:21:12.775 }, 00:21:12.775 { 00:21:12.775 "name": "BaseBdev3", 00:21:12.775 "uuid": "9dceb45f-ea35-4622-b2b7-1f272e95fa78", 00:21:12.775 "is_configured": true, 00:21:12.775 "data_offset": 2048, 00:21:12.775 "data_size": 63488 00:21:12.775 }, 
00:21:12.775 { 00:21:12.775 "name": "BaseBdev4", 00:21:12.775 "uuid": "5af750ab-ead9-44be-a3d5-879c1e00e96b", 00:21:12.775 "is_configured": true, 00:21:12.775 "data_offset": 2048, 00:21:12.775 "data_size": 63488 00:21:12.775 } 00:21:12.775 ] 00:21:12.775 }' 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.775 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.033 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:13.033 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.033 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.033 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.033 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.033 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:13.033 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:13.033 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.033 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.033 [2024-11-08 17:08:49.680451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.295 "name": "Existed_Raid", 00:21:13.295 "uuid": "d5344c0b-7113-4755-9a25-9ea17c243517", 00:21:13.295 "strip_size_kb": 64, 00:21:13.295 "state": "configuring", 00:21:13.295 "raid_level": "raid0", 00:21:13.295 "superblock": true, 00:21:13.295 "num_base_bdevs": 4, 00:21:13.295 "num_base_bdevs_discovered": 2, 00:21:13.295 "num_base_bdevs_operational": 4, 00:21:13.295 
"base_bdevs_list": [ 00:21:13.295 { 00:21:13.295 "name": null, 00:21:13.295 "uuid": "9b3071ff-3e3f-4514-a178-a2a9ea8cddd1", 00:21:13.295 "is_configured": false, 00:21:13.295 "data_offset": 0, 00:21:13.295 "data_size": 63488 00:21:13.295 }, 00:21:13.295 { 00:21:13.295 "name": null, 00:21:13.295 "uuid": "4a515037-4d4a-448f-9b52-028d9009c3f1", 00:21:13.295 "is_configured": false, 00:21:13.295 "data_offset": 0, 00:21:13.295 "data_size": 63488 00:21:13.295 }, 00:21:13.295 { 00:21:13.295 "name": "BaseBdev3", 00:21:13.295 "uuid": "9dceb45f-ea35-4622-b2b7-1f272e95fa78", 00:21:13.295 "is_configured": true, 00:21:13.295 "data_offset": 2048, 00:21:13.295 "data_size": 63488 00:21:13.295 }, 00:21:13.295 { 00:21:13.295 "name": "BaseBdev4", 00:21:13.295 "uuid": "5af750ab-ead9-44be-a3d5-879c1e00e96b", 00:21:13.295 "is_configured": true, 00:21:13.295 "data_offset": 2048, 00:21:13.295 "data_size": 63488 00:21:13.295 } 00:21:13.295 ] 00:21:13.295 }' 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.295 17:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.553 [2024-11-08 17:08:50.121838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.553 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.554 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.554 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.554 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.554 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:13.554 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.554 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.554 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.554 "name": "Existed_Raid", 00:21:13.554 "uuid": "d5344c0b-7113-4755-9a25-9ea17c243517", 00:21:13.554 "strip_size_kb": 64, 00:21:13.554 "state": "configuring", 00:21:13.554 "raid_level": "raid0", 00:21:13.554 "superblock": true, 00:21:13.554 "num_base_bdevs": 4, 00:21:13.554 "num_base_bdevs_discovered": 3, 00:21:13.554 "num_base_bdevs_operational": 4, 00:21:13.554 "base_bdevs_list": [ 00:21:13.554 { 00:21:13.554 "name": null, 00:21:13.554 "uuid": "9b3071ff-3e3f-4514-a178-a2a9ea8cddd1", 00:21:13.554 "is_configured": false, 00:21:13.554 "data_offset": 0, 00:21:13.554 "data_size": 63488 00:21:13.554 }, 00:21:13.554 { 00:21:13.554 "name": "BaseBdev2", 00:21:13.554 "uuid": "4a515037-4d4a-448f-9b52-028d9009c3f1", 00:21:13.554 "is_configured": true, 00:21:13.554 "data_offset": 2048, 00:21:13.554 "data_size": 63488 00:21:13.554 }, 00:21:13.554 { 00:21:13.554 "name": "BaseBdev3", 00:21:13.554 "uuid": "9dceb45f-ea35-4622-b2b7-1f272e95fa78", 00:21:13.554 "is_configured": true, 00:21:13.554 "data_offset": 2048, 00:21:13.554 "data_size": 63488 00:21:13.554 }, 00:21:13.554 { 00:21:13.554 "name": "BaseBdev4", 00:21:13.554 "uuid": "5af750ab-ead9-44be-a3d5-879c1e00e96b", 00:21:13.554 "is_configured": true, 00:21:13.554 "data_offset": 2048, 00:21:13.554 "data_size": 63488 00:21:13.554 } 00:21:13.554 ] 00:21:13.554 }' 00:21:13.554 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.554 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
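The `verify_raid_bdev_state` calls traced above fetch the raid bdev JSON with `rpc_cmd bdev_raid_get_bdevs all` and compare selected fields against the expected values. A minimal standalone sketch of that check follows; note the JSON is a canned copy of the fields from the trace, and `python3` stands in for the script's `jq` filters, so this is an illustration of the pattern rather than the real helper:

```shell
#!/usr/bin/env bash
# Canned copy of the fields the trace checks; a real run would obtain this via:
#   rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
raid_bdev_info='{"name":"Existed_Raid","state":"configuring","raid_level":"raid0",
 "strip_size_kb":64,"num_base_bdevs":4,"num_base_bdevs_discovered":3}'

# python3 stands in here for the jq -r ".field" extraction used in the trace
jfield() {
    python3 -c 'import json,sys; print(json.loads(sys.stdin.read())[sys.argv[1]])' \
        "$1" <<<"$raid_bdev_info"
}

state=$(jfield state)
raid_level=$(jfield raid_level)

# Fail loudly on mismatch, as verify_raid_bdev_state does
[[ $state == configuring ]] || { echo "state mismatch: $state" >&2; exit 1; }
[[ $raid_level == raid0 ]] || { echo "level mismatch: $raid_level" >&2; exit 1; }
echo "Existed_Raid verified: $state/$raid_level"
```

The real helper additionally compares `strip_size_kb`, `num_base_bdevs`, and the discovered/operational counts in the same fetch-then-compare style.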
00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9b3071ff-3e3f-4514-a178-a2a9ea8cddd1 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.812 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.070 [2024-11-08 17:08:50.530789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:14.070 [2024-11-08 17:08:50.531255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:14.070 NewBaseBdev 00:21:14.070 [2024-11-08 17:08:50.531360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:14.070 [2024-11-08 17:08:50.531646] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:14.070 [2024-11-08 17:08:50.531803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:14.070 [2024-11-08 17:08:50.531817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:14.070 [2024-11-08 17:08:50.531945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.070 [ 00:21:14.070 { 00:21:14.070 "name": "NewBaseBdev", 00:21:14.070 "aliases": [ 00:21:14.070 "9b3071ff-3e3f-4514-a178-a2a9ea8cddd1" 00:21:14.070 ], 00:21:14.070 "product_name": "Malloc disk", 00:21:14.070 "block_size": 512, 00:21:14.070 "num_blocks": 65536, 00:21:14.070 "uuid": "9b3071ff-3e3f-4514-a178-a2a9ea8cddd1", 00:21:14.070 "assigned_rate_limits": { 00:21:14.070 "rw_ios_per_sec": 0, 00:21:14.070 "rw_mbytes_per_sec": 0, 00:21:14.070 "r_mbytes_per_sec": 0, 00:21:14.070 "w_mbytes_per_sec": 0 00:21:14.070 }, 00:21:14.070 "claimed": true, 00:21:14.070 "claim_type": "exclusive_write", 00:21:14.070 "zoned": false, 00:21:14.070 "supported_io_types": { 00:21:14.070 "read": true, 00:21:14.070 "write": true, 00:21:14.070 "unmap": true, 00:21:14.070 "flush": true, 00:21:14.070 "reset": true, 00:21:14.070 "nvme_admin": false, 00:21:14.070 "nvme_io": false, 00:21:14.070 "nvme_io_md": false, 00:21:14.070 "write_zeroes": true, 00:21:14.070 "zcopy": true, 00:21:14.070 "get_zone_info": false, 00:21:14.070 "zone_management": false, 00:21:14.070 "zone_append": false, 00:21:14.070 "compare": false, 00:21:14.070 "compare_and_write": false, 00:21:14.070 "abort": true, 00:21:14.070 "seek_hole": false, 00:21:14.070 "seek_data": false, 00:21:14.070 "copy": true, 00:21:14.070 "nvme_iov_md": false 00:21:14.070 }, 00:21:14.070 "memory_domains": [ 00:21:14.070 { 00:21:14.070 "dma_device_id": "system", 00:21:14.070 "dma_device_type": 1 00:21:14.070 }, 00:21:14.070 { 00:21:14.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.070 "dma_device_type": 2 00:21:14.070 } 00:21:14.070 ], 00:21:14.070 "driver_specific": {} 00:21:14.070 } 00:21:14.070 ] 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:14.070 
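After `bdev_malloc_create` reports `NewBaseBdev`, the script blocks in `waitforbdev` until `bdev_get_bdevs -b NewBaseBdev -t 2000` succeeds. A hedged sketch of that polling pattern; the `rpc_get_bdev` stub and `FAKE_BDEVS` table are invented here purely to keep the example self-contained:

```shell
#!/usr/bin/env bash
# Stand-in for: rpc_cmd bdev_get_bdevs -b "$1" (succeeds once the bdev exists)
rpc_get_bdev() {
    [[ -n ${FAKE_BDEVS[$1]:-} ]]
}
declare -A FAKE_BDEVS=([NewBaseBdev]=1)

# Poll in 100 ms steps until the bdev appears or the timeout (ms) expires,
# mirroring the waitforbdev helper's retry loop.
waitforbdev() {
    local bdev_name=$1 timeout=${2:-2000} i
    for ((i = 0; i < timeout / 100; i++)); do
        rpc_get_bdev "$bdev_name" && return 0
        sleep 0.1
    done
    return 1
}

waitforbdev NewBaseBdev && echo "NewBaseBdev ready"
```

In the trace the loop returns immediately (`return 0`) because the malloc bdev is registered synchronously before `bdev_wait_for_examine` completes.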
17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.070 "name": "Existed_Raid", 00:21:14.070 "uuid": "d5344c0b-7113-4755-9a25-9ea17c243517", 00:21:14.070 "strip_size_kb": 64, 
00:21:14.070 "state": "online", 00:21:14.070 "raid_level": "raid0", 00:21:14.070 "superblock": true, 00:21:14.070 "num_base_bdevs": 4, 00:21:14.070 "num_base_bdevs_discovered": 4, 00:21:14.070 "num_base_bdevs_operational": 4, 00:21:14.070 "base_bdevs_list": [ 00:21:14.070 { 00:21:14.070 "name": "NewBaseBdev", 00:21:14.070 "uuid": "9b3071ff-3e3f-4514-a178-a2a9ea8cddd1", 00:21:14.070 "is_configured": true, 00:21:14.070 "data_offset": 2048, 00:21:14.070 "data_size": 63488 00:21:14.070 }, 00:21:14.070 { 00:21:14.070 "name": "BaseBdev2", 00:21:14.070 "uuid": "4a515037-4d4a-448f-9b52-028d9009c3f1", 00:21:14.070 "is_configured": true, 00:21:14.070 "data_offset": 2048, 00:21:14.070 "data_size": 63488 00:21:14.070 }, 00:21:14.070 { 00:21:14.070 "name": "BaseBdev3", 00:21:14.070 "uuid": "9dceb45f-ea35-4622-b2b7-1f272e95fa78", 00:21:14.070 "is_configured": true, 00:21:14.070 "data_offset": 2048, 00:21:14.070 "data_size": 63488 00:21:14.070 }, 00:21:14.070 { 00:21:14.070 "name": "BaseBdev4", 00:21:14.070 "uuid": "5af750ab-ead9-44be-a3d5-879c1e00e96b", 00:21:14.070 "is_configured": true, 00:21:14.070 "data_offset": 2048, 00:21:14.070 "data_size": 63488 00:21:14.070 } 00:21:14.070 ] 00:21:14.070 }' 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.070 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.329 [2024-11-08 17:08:50.888189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.329 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:14.329 "name": "Existed_Raid", 00:21:14.329 "aliases": [ 00:21:14.329 "d5344c0b-7113-4755-9a25-9ea17c243517" 00:21:14.329 ], 00:21:14.329 "product_name": "Raid Volume", 00:21:14.329 "block_size": 512, 00:21:14.329 "num_blocks": 253952, 00:21:14.329 "uuid": "d5344c0b-7113-4755-9a25-9ea17c243517", 00:21:14.329 "assigned_rate_limits": { 00:21:14.329 "rw_ios_per_sec": 0, 00:21:14.329 "rw_mbytes_per_sec": 0, 00:21:14.329 "r_mbytes_per_sec": 0, 00:21:14.329 "w_mbytes_per_sec": 0 00:21:14.329 }, 00:21:14.329 "claimed": false, 00:21:14.329 "zoned": false, 00:21:14.329 "supported_io_types": { 00:21:14.329 "read": true, 00:21:14.329 "write": true, 00:21:14.329 "unmap": true, 00:21:14.329 "flush": true, 00:21:14.329 "reset": true, 00:21:14.329 "nvme_admin": false, 00:21:14.329 "nvme_io": false, 00:21:14.329 "nvme_io_md": false, 00:21:14.329 "write_zeroes": true, 00:21:14.329 "zcopy": false, 00:21:14.329 "get_zone_info": false, 00:21:14.329 "zone_management": false, 00:21:14.329 "zone_append": false, 00:21:14.329 "compare": false, 00:21:14.329 "compare_and_write": false, 
00:21:14.329 "abort": false, 00:21:14.329 "seek_hole": false, 00:21:14.329 "seek_data": false, 00:21:14.329 "copy": false, 00:21:14.329 "nvme_iov_md": false 00:21:14.329 }, 00:21:14.329 "memory_domains": [ 00:21:14.329 { 00:21:14.329 "dma_device_id": "system", 00:21:14.329 "dma_device_type": 1 00:21:14.329 }, 00:21:14.329 { 00:21:14.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.330 "dma_device_type": 2 00:21:14.330 }, 00:21:14.330 { 00:21:14.330 "dma_device_id": "system", 00:21:14.330 "dma_device_type": 1 00:21:14.330 }, 00:21:14.330 { 00:21:14.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.330 "dma_device_type": 2 00:21:14.330 }, 00:21:14.330 { 00:21:14.330 "dma_device_id": "system", 00:21:14.330 "dma_device_type": 1 00:21:14.330 }, 00:21:14.330 { 00:21:14.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.330 "dma_device_type": 2 00:21:14.330 }, 00:21:14.330 { 00:21:14.330 "dma_device_id": "system", 00:21:14.330 "dma_device_type": 1 00:21:14.330 }, 00:21:14.330 { 00:21:14.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.330 "dma_device_type": 2 00:21:14.330 } 00:21:14.330 ], 00:21:14.330 "driver_specific": { 00:21:14.330 "raid": { 00:21:14.330 "uuid": "d5344c0b-7113-4755-9a25-9ea17c243517", 00:21:14.330 "strip_size_kb": 64, 00:21:14.330 "state": "online", 00:21:14.330 "raid_level": "raid0", 00:21:14.330 "superblock": true, 00:21:14.330 "num_base_bdevs": 4, 00:21:14.330 "num_base_bdevs_discovered": 4, 00:21:14.330 "num_base_bdevs_operational": 4, 00:21:14.330 "base_bdevs_list": [ 00:21:14.330 { 00:21:14.330 "name": "NewBaseBdev", 00:21:14.330 "uuid": "9b3071ff-3e3f-4514-a178-a2a9ea8cddd1", 00:21:14.330 "is_configured": true, 00:21:14.330 "data_offset": 2048, 00:21:14.330 "data_size": 63488 00:21:14.330 }, 00:21:14.330 { 00:21:14.330 "name": "BaseBdev2", 00:21:14.330 "uuid": "4a515037-4d4a-448f-9b52-028d9009c3f1", 00:21:14.330 "is_configured": true, 00:21:14.330 "data_offset": 2048, 00:21:14.330 "data_size": 63488 00:21:14.330 }, 
00:21:14.330 { 00:21:14.330 "name": "BaseBdev3", 00:21:14.330 "uuid": "9dceb45f-ea35-4622-b2b7-1f272e95fa78", 00:21:14.330 "is_configured": true, 00:21:14.330 "data_offset": 2048, 00:21:14.330 "data_size": 63488 00:21:14.330 }, 00:21:14.330 { 00:21:14.330 "name": "BaseBdev4", 00:21:14.330 "uuid": "5af750ab-ead9-44be-a3d5-879c1e00e96b", 00:21:14.330 "is_configured": true, 00:21:14.330 "data_offset": 2048, 00:21:14.330 "data_size": 63488 00:21:14.330 } 00:21:14.330 ] 00:21:14.330 } 00:21:14.330 } 00:21:14.330 }' 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:14.330 BaseBdev2 00:21:14.330 BaseBdev3 00:21:14.330 BaseBdev4' 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:14.330 17:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.330 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.587 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:14.588 17:08:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.588 [2024-11-08 17:08:51.095825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:14.588 [2024-11-08 17:08:51.096015] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:14.588 [2024-11-08 17:08:51.096161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.588 [2024-11-08 17:08:51.096292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.588 [2024-11-08 17:08:51.096682] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68676 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 68676 ']' 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68676 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68676 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68676' 00:21:14.588 killing process with pid 68676 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68676 00:21:14.588 [2024-11-08 17:08:51.126326] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:14.588 17:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68676 00:21:14.847 [2024-11-08 17:08:51.390874] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:15.785 17:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:15.785 00:21:15.785 real 0m8.986s 00:21:15.785 user 0m14.227s 00:21:15.785 sys 0m1.494s 00:21:15.785 17:08:52 
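The `killprocess 68676` sequence above guards the final `kill` with `kill -0` and a `ps` name lookup, so it never signals a recycled PID that now belongs to another program (and refuses to target a bare `sudo` wrapper). The same guard, sketched against the current shell's own PID rather than a real daemon:

```shell
#!/usr/bin/env bash
pid=$$                                        # stand-in for the daemon PID (68676 in the log)
kill -0 "$pid"                                # probe only: process must still exist
process_name=$(ps --no-headers -o comm= -p "$pid")
[[ $process_name != sudo ]]                   # the helper bails rather than kill sudo itself
echo "would kill process $pid ($process_name)"
```

Only after these checks does the real helper send the signal and `wait` for the reactor to exit.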
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:15.785 17:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.785 ************************************ 00:21:15.785 END TEST raid_state_function_test_sb 00:21:15.785 ************************************ 00:21:15.785 17:08:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:21:15.785 17:08:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:21:15.785 17:08:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:15.785 17:08:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:15.785 ************************************ 00:21:15.785 START TEST raid_superblock_test 00:21:15.785 ************************************ 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@399 -- # local strip_size 00:21:15.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:15.785 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69319 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69319 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 69319 ']' 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.786 17:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:15.786 [2024-11-08 17:08:52.288991] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:21:15.786 [2024-11-08 17:08:52.289288] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69319 ] 00:21:15.786 [2024-11-08 17:08:52.453996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.043 [2024-11-08 17:08:52.555171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.043 [2024-11-08 17:08:52.695822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.043 [2024-11-08 17:08:52.696026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:16.609 
17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.609 malloc1 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.609 [2024-11-08 17:08:53.136914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:16.609 [2024-11-08 17:08:53.137096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.609 [2024-11-08 17:08:53.137140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:16.609 [2024-11-08 17:08:53.137538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.609 [2024-11-08 17:08:53.139772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.609 [2024-11-08 17:08:53.139899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:16.609 pt1 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.609 malloc2 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.609 [2024-11-08 17:08:53.173140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:16.609 [2024-11-08 17:08:53.173296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.609 [2024-11-08 17:08:53.173324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:16.609 [2024-11-08 17:08:53.173334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.609 [2024-11-08 17:08:53.175477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.609 pt2 00:21:16.609 [2024-11-08 17:08:53.175592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.609 malloc3 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.609 [2024-11-08 17:08:53.222844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:16.609 [2024-11-08 17:08:53.223005] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.609 [2024-11-08 17:08:53.223049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:16.609 [2024-11-08 17:08:53.223116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.609 [2024-11-08 17:08:53.225220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.609 [2024-11-08 17:08:53.225331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:16.609 pt3 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.609 malloc4 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.609 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.609 [2024-11-08 17:08:53.267143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:16.609 [2024-11-08 17:08:53.267347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.609 [2024-11-08 17:08:53.267387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:16.610 [2024-11-08 17:08:53.267447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.610 [2024-11-08 17:08:53.269684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.610 [2024-11-08 17:08:53.269820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:16.610 pt4 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.610 [2024-11-08 17:08:53.275172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:16.610 [2024-11-08 
17:08:53.277153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:16.610 [2024-11-08 17:08:53.277301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:16.610 [2024-11-08 17:08:53.277388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:16.610 [2024-11-08 17:08:53.277611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:16.610 [2024-11-08 17:08:53.277697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:16.610 [2024-11-08 17:08:53.278004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:16.610 [2024-11-08 17:08:53.278160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:16.610 [2024-11-08 17:08:53.278172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:16.610 [2024-11-08 17:08:53.278318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.610 "name": "raid_bdev1", 00:21:16.610 "uuid": "c2dd9b03-99c8-437b-9705-53ddef42c169", 00:21:16.610 "strip_size_kb": 64, 00:21:16.610 "state": "online", 00:21:16.610 "raid_level": "raid0", 00:21:16.610 "superblock": true, 00:21:16.610 "num_base_bdevs": 4, 00:21:16.610 "num_base_bdevs_discovered": 4, 00:21:16.610 "num_base_bdevs_operational": 4, 00:21:16.610 "base_bdevs_list": [ 00:21:16.610 { 00:21:16.610 "name": "pt1", 00:21:16.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:16.610 "is_configured": true, 00:21:16.610 "data_offset": 2048, 00:21:16.610 "data_size": 63488 00:21:16.610 }, 00:21:16.610 { 00:21:16.610 "name": "pt2", 00:21:16.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:16.610 "is_configured": true, 00:21:16.610 "data_offset": 2048, 00:21:16.610 "data_size": 63488 00:21:16.610 }, 00:21:16.610 { 00:21:16.610 "name": "pt3", 00:21:16.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:16.610 "is_configured": true, 00:21:16.610 "data_offset": 2048, 00:21:16.610 
"data_size": 63488 00:21:16.610 }, 00:21:16.610 { 00:21:16.610 "name": "pt4", 00:21:16.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:16.610 "is_configured": true, 00:21:16.610 "data_offset": 2048, 00:21:16.610 "data_size": 63488 00:21:16.610 } 00:21:16.610 ] 00:21:16.610 }' 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.610 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.176 [2024-11-08 17:08:53.611566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:17.176 "name": "raid_bdev1", 00:21:17.176 "aliases": [ 00:21:17.176 "c2dd9b03-99c8-437b-9705-53ddef42c169" 
00:21:17.176 ], 00:21:17.176 "product_name": "Raid Volume", 00:21:17.176 "block_size": 512, 00:21:17.176 "num_blocks": 253952, 00:21:17.176 "uuid": "c2dd9b03-99c8-437b-9705-53ddef42c169", 00:21:17.176 "assigned_rate_limits": { 00:21:17.176 "rw_ios_per_sec": 0, 00:21:17.176 "rw_mbytes_per_sec": 0, 00:21:17.176 "r_mbytes_per_sec": 0, 00:21:17.176 "w_mbytes_per_sec": 0 00:21:17.176 }, 00:21:17.176 "claimed": false, 00:21:17.176 "zoned": false, 00:21:17.176 "supported_io_types": { 00:21:17.176 "read": true, 00:21:17.176 "write": true, 00:21:17.176 "unmap": true, 00:21:17.176 "flush": true, 00:21:17.176 "reset": true, 00:21:17.176 "nvme_admin": false, 00:21:17.176 "nvme_io": false, 00:21:17.176 "nvme_io_md": false, 00:21:17.176 "write_zeroes": true, 00:21:17.176 "zcopy": false, 00:21:17.176 "get_zone_info": false, 00:21:17.176 "zone_management": false, 00:21:17.176 "zone_append": false, 00:21:17.176 "compare": false, 00:21:17.176 "compare_and_write": false, 00:21:17.176 "abort": false, 00:21:17.176 "seek_hole": false, 00:21:17.176 "seek_data": false, 00:21:17.176 "copy": false, 00:21:17.176 "nvme_iov_md": false 00:21:17.176 }, 00:21:17.176 "memory_domains": [ 00:21:17.176 { 00:21:17.176 "dma_device_id": "system", 00:21:17.176 "dma_device_type": 1 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.176 "dma_device_type": 2 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "dma_device_id": "system", 00:21:17.176 "dma_device_type": 1 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.176 "dma_device_type": 2 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "dma_device_id": "system", 00:21:17.176 "dma_device_type": 1 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.176 "dma_device_type": 2 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "dma_device_id": "system", 00:21:17.176 "dma_device_type": 1 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:17.176 "dma_device_type": 2 00:21:17.176 } 00:21:17.176 ], 00:21:17.176 "driver_specific": { 00:21:17.176 "raid": { 00:21:17.176 "uuid": "c2dd9b03-99c8-437b-9705-53ddef42c169", 00:21:17.176 "strip_size_kb": 64, 00:21:17.176 "state": "online", 00:21:17.176 "raid_level": "raid0", 00:21:17.176 "superblock": true, 00:21:17.176 "num_base_bdevs": 4, 00:21:17.176 "num_base_bdevs_discovered": 4, 00:21:17.176 "num_base_bdevs_operational": 4, 00:21:17.176 "base_bdevs_list": [ 00:21:17.176 { 00:21:17.176 "name": "pt1", 00:21:17.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:17.176 "is_configured": true, 00:21:17.176 "data_offset": 2048, 00:21:17.176 "data_size": 63488 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "name": "pt2", 00:21:17.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.176 "is_configured": true, 00:21:17.176 "data_offset": 2048, 00:21:17.176 "data_size": 63488 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "name": "pt3", 00:21:17.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.176 "is_configured": true, 00:21:17.176 "data_offset": 2048, 00:21:17.176 "data_size": 63488 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "name": "pt4", 00:21:17.176 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:17.176 "is_configured": true, 00:21:17.176 "data_offset": 2048, 00:21:17.176 "data_size": 63488 00:21:17.176 } 00:21:17.176 ] 00:21:17.176 } 00:21:17.176 } 00:21:17.176 }' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:17.176 pt2 00:21:17.176 pt3 00:21:17.176 pt4' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:17.176 17:08:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:17.176 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.177 [2024-11-08 17:08:53.843580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c2dd9b03-99c8-437b-9705-53ddef42c169 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c2dd9b03-99c8-437b-9705-53ddef42c169 ']' 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.177 [2024-11-08 17:08:53.875265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.177 [2024-11-08 17:08:53.875382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:17.177 [2024-11-08 17:08:53.875504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:17.177 [2024-11-08 17:08:53.875625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:17.177 [2024-11-08 17:08:53.875710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:17.177 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.437 17:08:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.437 [2024-11-08 17:08:53.987315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:17.437 [2024-11-08 17:08:53.989297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:17.437 [2024-11-08 17:08:53.989344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:17.437 [2024-11-08 17:08:53.989378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:17.437 [2024-11-08 17:08:53.989423] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:17.437 [2024-11-08 17:08:53.989473] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:17.437 [2024-11-08 17:08:53.989492] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:17.437 [2024-11-08 17:08:53.989510] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:17.437 [2024-11-08 17:08:53.989523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.437 [2024-11-08 17:08:53.989536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:21:17.437 request: 00:21:17.437 { 00:21:17.437 "name": "raid_bdev1", 00:21:17.437 "raid_level": "raid0", 00:21:17.437 "base_bdevs": [ 00:21:17.437 "malloc1", 00:21:17.437 "malloc2", 00:21:17.437 "malloc3", 00:21:17.437 "malloc4" 00:21:17.437 ], 00:21:17.437 "strip_size_kb": 64, 00:21:17.437 "superblock": false, 00:21:17.437 "method": "bdev_raid_create", 00:21:17.437 "req_id": 1 00:21:17.437 } 00:21:17.437 Got JSON-RPC error response 00:21:17.437 response: 00:21:17.437 { 00:21:17.437 "code": -17, 00:21:17.437 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:17.437 } 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.437 17:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.437 [2024-11-08 17:08:54.031300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:17.437 [2024-11-08 17:08:54.031455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.437 [2024-11-08 17:08:54.031493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:17.437 [2024-11-08 17:08:54.031548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.437 [2024-11-08 17:08:54.033814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.437 [2024-11-08 17:08:54.033928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:17.437 [2024-11-08 17:08:54.034054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:17.437 [2024-11-08 17:08:54.034134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:17.437 pt1 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:21:17.437 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.438 "name": "raid_bdev1", 00:21:17.438 "uuid": "c2dd9b03-99c8-437b-9705-53ddef42c169", 00:21:17.438 "strip_size_kb": 64, 00:21:17.438 "state": "configuring", 00:21:17.438 "raid_level": "raid0", 00:21:17.438 "superblock": true, 00:21:17.438 "num_base_bdevs": 4, 00:21:17.438 "num_base_bdevs_discovered": 1, 00:21:17.438 "num_base_bdevs_operational": 4, 00:21:17.438 "base_bdevs_list": [ 00:21:17.438 { 00:21:17.438 "name": "pt1", 00:21:17.438 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:17.438 "is_configured": true, 00:21:17.438 "data_offset": 2048, 00:21:17.438 "data_size": 63488 00:21:17.438 }, 00:21:17.438 { 00:21:17.438 "name": null, 00:21:17.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.438 "is_configured": false, 00:21:17.438 "data_offset": 2048, 00:21:17.438 "data_size": 63488 00:21:17.438 }, 00:21:17.438 { 00:21:17.438 "name": null, 00:21:17.438 
"uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.438 "is_configured": false, 00:21:17.438 "data_offset": 2048, 00:21:17.438 "data_size": 63488 00:21:17.438 }, 00:21:17.438 { 00:21:17.438 "name": null, 00:21:17.438 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:17.438 "is_configured": false, 00:21:17.438 "data_offset": 2048, 00:21:17.438 "data_size": 63488 00:21:17.438 } 00:21:17.438 ] 00:21:17.438 }' 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.438 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.699 [2024-11-08 17:08:54.355430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:17.699 [2024-11-08 17:08:54.355649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.699 [2024-11-08 17:08:54.355677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:17.699 [2024-11-08 17:08:54.355690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.699 [2024-11-08 17:08:54.356196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.699 [2024-11-08 17:08:54.356224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:17.699 [2024-11-08 17:08:54.356313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:17.699 [2024-11-08 17:08:54.356339] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:17.699 pt2 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.699 [2024-11-08 17:08:54.363428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.699 17:08:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.699 "name": "raid_bdev1", 00:21:17.699 "uuid": "c2dd9b03-99c8-437b-9705-53ddef42c169", 00:21:17.699 "strip_size_kb": 64, 00:21:17.699 "state": "configuring", 00:21:17.699 "raid_level": "raid0", 00:21:17.699 "superblock": true, 00:21:17.699 "num_base_bdevs": 4, 00:21:17.699 "num_base_bdevs_discovered": 1, 00:21:17.699 "num_base_bdevs_operational": 4, 00:21:17.699 "base_bdevs_list": [ 00:21:17.699 { 00:21:17.699 "name": "pt1", 00:21:17.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:17.699 "is_configured": true, 00:21:17.699 "data_offset": 2048, 00:21:17.699 "data_size": 63488 00:21:17.699 }, 00:21:17.699 { 00:21:17.699 "name": null, 00:21:17.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.699 "is_configured": false, 00:21:17.699 "data_offset": 0, 00:21:17.699 "data_size": 63488 00:21:17.699 }, 00:21:17.699 { 00:21:17.699 "name": null, 00:21:17.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.699 "is_configured": false, 00:21:17.699 "data_offset": 2048, 00:21:17.699 "data_size": 63488 00:21:17.699 }, 00:21:17.699 { 00:21:17.699 "name": null, 00:21:17.699 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:17.699 "is_configured": false, 00:21:17.699 "data_offset": 2048, 00:21:17.699 "data_size": 63488 00:21:17.699 } 00:21:17.699 ] 00:21:17.699 }' 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.699 17:08:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.267 [2024-11-08 17:08:54.691477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:18.267 [2024-11-08 17:08:54.691691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.267 [2024-11-08 17:08:54.691730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:18.267 [2024-11-08 17:08:54.691794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.267 [2024-11-08 17:08:54.692210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.267 [2024-11-08 17:08:54.692233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:18.267 [2024-11-08 17:08:54.692307] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:18.267 [2024-11-08 17:08:54.692326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:18.267 pt2 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.267 [2024-11-08 17:08:54.703469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:18.267 [2024-11-08 17:08:54.703635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.267 [2024-11-08 17:08:54.703677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:18.267 [2024-11-08 17:08:54.703808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.267 [2024-11-08 17:08:54.704230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.267 [2024-11-08 17:08:54.704341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:18.267 [2024-11-08 17:08:54.704466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:18.267 [2024-11-08 17:08:54.704505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:18.267 pt3 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.267 [2024-11-08 17:08:54.711444] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:18.267 [2024-11-08 17:08:54.711485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.267 [2024-11-08 17:08:54.711500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:18.267 [2024-11-08 17:08:54.711508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.267 [2024-11-08 17:08:54.711892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.267 [2024-11-08 17:08:54.711904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:18.267 [2024-11-08 17:08:54.711962] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:18.267 [2024-11-08 17:08:54.711977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:18.267 [2024-11-08 17:08:54.712102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:18.267 [2024-11-08 17:08:54.712111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:18.267 [2024-11-08 17:08:54.712342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:18.267 [2024-11-08 17:08:54.712462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:18.267 [2024-11-08 17:08:54.712472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:18.267 [2024-11-08 17:08:54.712587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.267 pt4 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.267 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.268 "name": "raid_bdev1", 00:21:18.268 "uuid": "c2dd9b03-99c8-437b-9705-53ddef42c169", 00:21:18.268 "strip_size_kb": 64, 00:21:18.268 "state": "online", 00:21:18.268 "raid_level": "raid0", 00:21:18.268 
"superblock": true, 00:21:18.268 "num_base_bdevs": 4, 00:21:18.268 "num_base_bdevs_discovered": 4, 00:21:18.268 "num_base_bdevs_operational": 4, 00:21:18.268 "base_bdevs_list": [ 00:21:18.268 { 00:21:18.268 "name": "pt1", 00:21:18.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:18.268 "is_configured": true, 00:21:18.268 "data_offset": 2048, 00:21:18.268 "data_size": 63488 00:21:18.268 }, 00:21:18.268 { 00:21:18.268 "name": "pt2", 00:21:18.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.268 "is_configured": true, 00:21:18.268 "data_offset": 2048, 00:21:18.268 "data_size": 63488 00:21:18.268 }, 00:21:18.268 { 00:21:18.268 "name": "pt3", 00:21:18.268 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:18.268 "is_configured": true, 00:21:18.268 "data_offset": 2048, 00:21:18.268 "data_size": 63488 00:21:18.268 }, 00:21:18.268 { 00:21:18.268 "name": "pt4", 00:21:18.268 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:18.268 "is_configured": true, 00:21:18.268 "data_offset": 2048, 00:21:18.268 "data_size": 63488 00:21:18.268 } 00:21:18.268 ] 00:21:18.268 }' 00:21:18.268 17:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.268 17:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.527 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:18.527 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:18.527 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:18.527 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:18.528 17:08:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:18.528 [2024-11-08 17:08:55.088007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:18.528 "name": "raid_bdev1", 00:21:18.528 "aliases": [ 00:21:18.528 "c2dd9b03-99c8-437b-9705-53ddef42c169" 00:21:18.528 ], 00:21:18.528 "product_name": "Raid Volume", 00:21:18.528 "block_size": 512, 00:21:18.528 "num_blocks": 253952, 00:21:18.528 "uuid": "c2dd9b03-99c8-437b-9705-53ddef42c169", 00:21:18.528 "assigned_rate_limits": { 00:21:18.528 "rw_ios_per_sec": 0, 00:21:18.528 "rw_mbytes_per_sec": 0, 00:21:18.528 "r_mbytes_per_sec": 0, 00:21:18.528 "w_mbytes_per_sec": 0 00:21:18.528 }, 00:21:18.528 "claimed": false, 00:21:18.528 "zoned": false, 00:21:18.528 "supported_io_types": { 00:21:18.528 "read": true, 00:21:18.528 "write": true, 00:21:18.528 "unmap": true, 00:21:18.528 "flush": true, 00:21:18.528 "reset": true, 00:21:18.528 "nvme_admin": false, 00:21:18.528 "nvme_io": false, 00:21:18.528 "nvme_io_md": false, 00:21:18.528 "write_zeroes": true, 00:21:18.528 "zcopy": false, 00:21:18.528 "get_zone_info": false, 00:21:18.528 "zone_management": false, 00:21:18.528 "zone_append": false, 00:21:18.528 "compare": false, 00:21:18.528 "compare_and_write": false, 00:21:18.528 "abort": false, 00:21:18.528 "seek_hole": false, 00:21:18.528 "seek_data": false, 00:21:18.528 "copy": false, 00:21:18.528 "nvme_iov_md": false 00:21:18.528 }, 00:21:18.528 
"memory_domains": [ 00:21:18.528 { 00:21:18.528 "dma_device_id": "system", 00:21:18.528 "dma_device_type": 1 00:21:18.528 }, 00:21:18.528 { 00:21:18.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.528 "dma_device_type": 2 00:21:18.528 }, 00:21:18.528 { 00:21:18.528 "dma_device_id": "system", 00:21:18.528 "dma_device_type": 1 00:21:18.528 }, 00:21:18.528 { 00:21:18.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.528 "dma_device_type": 2 00:21:18.528 }, 00:21:18.528 { 00:21:18.528 "dma_device_id": "system", 00:21:18.528 "dma_device_type": 1 00:21:18.528 }, 00:21:18.528 { 00:21:18.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.528 "dma_device_type": 2 00:21:18.528 }, 00:21:18.528 { 00:21:18.528 "dma_device_id": "system", 00:21:18.528 "dma_device_type": 1 00:21:18.528 }, 00:21:18.528 { 00:21:18.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.528 "dma_device_type": 2 00:21:18.528 } 00:21:18.528 ], 00:21:18.528 "driver_specific": { 00:21:18.528 "raid": { 00:21:18.528 "uuid": "c2dd9b03-99c8-437b-9705-53ddef42c169", 00:21:18.528 "strip_size_kb": 64, 00:21:18.528 "state": "online", 00:21:18.528 "raid_level": "raid0", 00:21:18.528 "superblock": true, 00:21:18.528 "num_base_bdevs": 4, 00:21:18.528 "num_base_bdevs_discovered": 4, 00:21:18.528 "num_base_bdevs_operational": 4, 00:21:18.528 "base_bdevs_list": [ 00:21:18.528 { 00:21:18.528 "name": "pt1", 00:21:18.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:18.528 "is_configured": true, 00:21:18.528 "data_offset": 2048, 00:21:18.528 "data_size": 63488 00:21:18.528 }, 00:21:18.528 { 00:21:18.528 "name": "pt2", 00:21:18.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.528 "is_configured": true, 00:21:18.528 "data_offset": 2048, 00:21:18.528 "data_size": 63488 00:21:18.528 }, 00:21:18.528 { 00:21:18.528 "name": "pt3", 00:21:18.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:18.528 "is_configured": true, 00:21:18.528 "data_offset": 2048, 00:21:18.528 "data_size": 63488 
00:21:18.528 }, 00:21:18.528 { 00:21:18.528 "name": "pt4", 00:21:18.528 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:18.528 "is_configured": true, 00:21:18.528 "data_offset": 2048, 00:21:18.528 "data_size": 63488 00:21:18.528 } 00:21:18.528 ] 00:21:18.528 } 00:21:18.528 } 00:21:18.528 }' 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:18.528 pt2 00:21:18.528 pt3 00:21:18.528 pt4' 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.528 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.790 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.790 [2024-11-08 17:08:55.319999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c2dd9b03-99c8-437b-9705-53ddef42c169 '!=' c2dd9b03-99c8-437b-9705-53ddef42c169 ']' 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69319 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 69319 ']' 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 69319 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69319 00:21:18.791 killing process with pid 69319 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69319' 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 69319 00:21:18.791 [2024-11-08 17:08:55.373568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:18.791 17:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 69319 00:21:18.791 [2024-11-08 17:08:55.373713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.791 [2024-11-08 17:08:55.373825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.791 [2024-11-08 17:08:55.373838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:19.050 [2024-11-08 17:08:55.642432] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:19.998 ************************************ 00:21:19.998 END TEST raid_superblock_test 00:21:19.998 ************************************ 00:21:19.999 17:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:19.999 00:21:19.999 real 0m4.169s 00:21:19.999 user 0m5.938s 00:21:19.999 sys 0m0.662s 00:21:19.999 17:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:19.999 17:08:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.999 17:08:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:21:19.999 17:08:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:19.999 17:08:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:19.999 17:08:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:19.999 ************************************ 00:21:19.999 START TEST raid_read_error_test 00:21:19.999 ************************************ 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.01aaPTljp7 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69567 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69567 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@833 -- # '[' -z 69567 ']' 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:19.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.999 17:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:19.999 [2024-11-08 17:08:56.536882] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:21:19.999 [2024-11-08 17:08:56.537214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69567 ] 00:21:19.999 [2024-11-08 17:08:56.701311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.257 [2024-11-08 17:08:56.823838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.515 [2024-11-08 17:08:56.975375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:20.515 [2024-11-08 17:08:56.975435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 BaseBdev1_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 true 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 [2024-11-08 17:08:57.575587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:21.079 [2024-11-08 17:08:57.575658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.079 [2024-11-08 17:08:57.575683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:21.079 [2024-11-08 17:08:57.575696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.079 [2024-11-08 17:08:57.578119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.079 [2024-11-08 17:08:57.578162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:21.079 BaseBdev1 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 BaseBdev2_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 true 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 [2024-11-08 17:08:57.622864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:21.079 [2024-11-08 17:08:57.622932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.079 [2024-11-08 17:08:57.622952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:21.079 [2024-11-08 17:08:57.622964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.079 [2024-11-08 17:08:57.625378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.079 BaseBdev2 00:21:21.079 [2024-11-08 17:08:57.625553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 BaseBdev3_malloc 00:21:21.079 17:08:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 true 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 [2024-11-08 17:08:57.681178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:21.079 [2024-11-08 17:08:57.681248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.079 [2024-11-08 17:08:57.681270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:21.079 [2024-11-08 17:08:57.681283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.079 [2024-11-08 17:08:57.683653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.079 [2024-11-08 17:08:57.683694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:21.079 BaseBdev3 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 BaseBdev4_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.079 true 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.079 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.080 [2024-11-08 17:08:57.732180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:21.080 [2024-11-08 17:08:57.732400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.080 [2024-11-08 17:08:57.732428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:21.080 [2024-11-08 17:08:57.732441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.080 [2024-11-08 17:08:57.734783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.080 [2024-11-08 17:08:57.734818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:21.080 BaseBdev4 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.080 [2024-11-08 17:08:57.740251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:21.080 [2024-11-08 17:08:57.742284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:21.080 [2024-11-08 17:08:57.742511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:21.080 [2024-11-08 17:08:57.742593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:21.080 [2024-11-08 17:08:57.742857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:21:21.080 [2024-11-08 17:08:57.742874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:21.080 [2024-11-08 17:08:57.743160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:21:21.080 [2024-11-08 17:08:57.743314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:21:21.080 [2024-11-08 17:08:57.743325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:21:21.080 [2024-11-08 17:08:57.743493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:21.080 17:08:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.080 "name": "raid_bdev1", 00:21:21.080 "uuid": "39ee214e-c68c-4d7a-af6b-2eecd7d4e4d6", 00:21:21.080 "strip_size_kb": 64, 00:21:21.080 "state": "online", 00:21:21.080 "raid_level": "raid0", 00:21:21.080 "superblock": true, 00:21:21.080 "num_base_bdevs": 4, 00:21:21.080 "num_base_bdevs_discovered": 4, 00:21:21.080 "num_base_bdevs_operational": 4, 00:21:21.080 "base_bdevs_list": [ 00:21:21.080 
{ 00:21:21.080 "name": "BaseBdev1", 00:21:21.080 "uuid": "13130ea3-be92-5876-b9f0-ffe83134afb8", 00:21:21.080 "is_configured": true, 00:21:21.080 "data_offset": 2048, 00:21:21.080 "data_size": 63488 00:21:21.080 }, 00:21:21.080 { 00:21:21.080 "name": "BaseBdev2", 00:21:21.080 "uuid": "75296a9c-0f99-5255-9d25-796229220051", 00:21:21.080 "is_configured": true, 00:21:21.080 "data_offset": 2048, 00:21:21.080 "data_size": 63488 00:21:21.080 }, 00:21:21.080 { 00:21:21.080 "name": "BaseBdev3", 00:21:21.080 "uuid": "fb06b35e-a15d-551e-b491-88b6982ecb63", 00:21:21.080 "is_configured": true, 00:21:21.080 "data_offset": 2048, 00:21:21.080 "data_size": 63488 00:21:21.080 }, 00:21:21.080 { 00:21:21.080 "name": "BaseBdev4", 00:21:21.080 "uuid": "75ee8a4c-8662-5eab-9904-c98306bc3a7f", 00:21:21.080 "is_configured": true, 00:21:21.080 "data_offset": 2048, 00:21:21.080 "data_size": 63488 00:21:21.080 } 00:21:21.080 ] 00:21:21.080 }' 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.080 17:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.646 17:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:21.646 17:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:21.646 [2024-11-08 17:08:58.217389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.579 17:08:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.579 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.579 17:08:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.579 "name": "raid_bdev1", 00:21:22.579 "uuid": "39ee214e-c68c-4d7a-af6b-2eecd7d4e4d6", 00:21:22.579 "strip_size_kb": 64, 00:21:22.579 "state": "online", 00:21:22.579 "raid_level": "raid0", 00:21:22.579 "superblock": true, 00:21:22.579 "num_base_bdevs": 4, 00:21:22.579 "num_base_bdevs_discovered": 4, 00:21:22.579 "num_base_bdevs_operational": 4, 00:21:22.579 "base_bdevs_list": [ 00:21:22.579 { 00:21:22.579 "name": "BaseBdev1", 00:21:22.579 "uuid": "13130ea3-be92-5876-b9f0-ffe83134afb8", 00:21:22.579 "is_configured": true, 00:21:22.579 "data_offset": 2048, 00:21:22.579 "data_size": 63488 00:21:22.579 }, 00:21:22.579 { 00:21:22.579 "name": "BaseBdev2", 00:21:22.579 "uuid": "75296a9c-0f99-5255-9d25-796229220051", 00:21:22.579 "is_configured": true, 00:21:22.579 "data_offset": 2048, 00:21:22.579 "data_size": 63488 00:21:22.579 }, 00:21:22.579 { 00:21:22.579 "name": "BaseBdev3", 00:21:22.579 "uuid": "fb06b35e-a15d-551e-b491-88b6982ecb63", 00:21:22.579 "is_configured": true, 00:21:22.579 "data_offset": 2048, 00:21:22.579 "data_size": 63488 00:21:22.580 }, 00:21:22.580 { 00:21:22.580 "name": "BaseBdev4", 00:21:22.580 "uuid": "75ee8a4c-8662-5eab-9904-c98306bc3a7f", 00:21:22.580 "is_configured": true, 00:21:22.580 "data_offset": 2048, 00:21:22.580 "data_size": 63488 00:21:22.580 } 00:21:22.580 ] 00:21:22.580 }' 00:21:22.580 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.580 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.838 [2024-11-08 17:08:59.487658] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:22.838 [2024-11-08 17:08:59.487701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:22.838 { 00:21:22.838 "results": [ 00:21:22.838 { 00:21:22.838 "job": "raid_bdev1", 00:21:22.838 "core_mask": "0x1", 00:21:22.838 "workload": "randrw", 00:21:22.838 "percentage": 50, 00:21:22.838 "status": "finished", 00:21:22.838 "queue_depth": 1, 00:21:22.838 "io_size": 131072, 00:21:22.838 "runtime": 1.267885, 00:21:22.838 "iops": 13605.33486869866, 00:21:22.838 "mibps": 1700.6668585873324, 00:21:22.838 "io_failed": 1, 00:21:22.838 "io_timeout": 0, 00:21:22.838 "avg_latency_us": 101.54948431083149, 00:21:22.838 "min_latency_us": 33.28, 00:21:22.838 "max_latency_us": 1701.4153846153847 00:21:22.838 } 00:21:22.838 ], 00:21:22.838 "core_count": 1 00:21:22.838 } 00:21:22.838 [2024-11-08 17:08:59.490815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.838 [2024-11-08 17:08:59.490887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.838 [2024-11-08 17:08:59.490938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.838 [2024-11-08 17:08:59.490951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69567 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69567 ']' 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69567 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69567 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69567' 00:21:22.838 killing process with pid 69567 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69567 00:21:22.838 17:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69567 00:21:22.838 [2024-11-08 17:08:59.527651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:23.103 [2024-11-08 17:08:59.745266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:24.038 17:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.01aaPTljp7 00:21:24.038 17:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:24.038 17:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:24.038 17:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:21:24.038 17:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:21:24.038 17:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:24.038 17:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:24.038 ************************************ 00:21:24.038 END TEST raid_read_error_test 00:21:24.038 ************************************ 00:21:24.038 17:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:21:24.038 00:21:24.038 real 0m4.109s 
00:21:24.038 user 0m4.930s 00:21:24.038 sys 0m0.511s 00:21:24.038 17:09:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:24.038 17:09:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.038 17:09:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:21:24.038 17:09:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:24.038 17:09:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:24.038 17:09:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:24.038 ************************************ 00:21:24.038 START TEST raid_write_error_test 00:21:24.038 ************************************ 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:24.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3UMQg9aQSE 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69708 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69708 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69708 ']' 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:24.038 17:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.038 [2024-11-08 17:09:00.722032] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:21:24.039 [2024-11-08 17:09:00.722174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69708 ] 00:21:24.296 [2024-11-08 17:09:00.884912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.296 [2024-11-08 17:09:01.008305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.554 [2024-11-08 17:09:01.162403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.554 [2024-11-08 17:09:01.162447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.156 BaseBdev1_malloc 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.156 true 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.156 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.156 [2024-11-08 17:09:01.635874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:25.157 [2024-11-08 17:09:01.635933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.157 [2024-11-08 17:09:01.635954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:25.157 [2024-11-08 17:09:01.635966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.157 [2024-11-08 17:09:01.638338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.157 [2024-11-08 17:09:01.638381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:25.157 BaseBdev1 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 BaseBdev2_malloc 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:25.157 17:09:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 true 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 [2024-11-08 17:09:01.687212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:25.157 [2024-11-08 17:09:01.687274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.157 [2024-11-08 17:09:01.687294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:25.157 [2024-11-08 17:09:01.687307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.157 [2024-11-08 17:09:01.689654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.157 [2024-11-08 17:09:01.689835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:25.157 BaseBdev2 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:21:25.157 BaseBdev3_malloc 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 true 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 [2024-11-08 17:09:01.753443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:25.157 [2024-11-08 17:09:01.753633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.157 [2024-11-08 17:09:01.753665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:25.157 [2024-11-08 17:09:01.753678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.157 [2024-11-08 17:09:01.756130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.157 [2024-11-08 17:09:01.756168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:25.157 BaseBdev3 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 BaseBdev4_malloc 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 true 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 [2024-11-08 17:09:01.802219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:25.157 [2024-11-08 17:09:01.802280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.157 [2024-11-08 17:09:01.802301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:25.157 [2024-11-08 17:09:01.802311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.157 [2024-11-08 17:09:01.804687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.157 [2024-11-08 17:09:01.804848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:25.157 BaseBdev4 
00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 [2024-11-08 17:09:01.814303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.157 [2024-11-08 17:09:01.816362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:25.157 [2024-11-08 17:09:01.816587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:25.157 [2024-11-08 17:09:01.816673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:25.157 [2024-11-08 17:09:01.816932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:21:25.157 [2024-11-08 17:09:01.816949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:25.157 [2024-11-08 17:09:01.817234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:21:25.157 [2024-11-08 17:09:01.817391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:21:25.157 [2024-11-08 17:09:01.817403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:21:25.157 [2024-11-08 17:09:01.817572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.157 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.416 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.416 "name": "raid_bdev1", 00:21:25.416 "uuid": "bb9d1c18-b4a8-44ac-a2f7-b837c46c1c7a", 00:21:25.416 "strip_size_kb": 64, 00:21:25.416 "state": "online", 00:21:25.416 "raid_level": "raid0", 00:21:25.416 "superblock": true, 00:21:25.416 "num_base_bdevs": 4, 00:21:25.416 "num_base_bdevs_discovered": 4, 00:21:25.416 
"num_base_bdevs_operational": 4, 00:21:25.416 "base_bdevs_list": [ 00:21:25.416 { 00:21:25.416 "name": "BaseBdev1", 00:21:25.416 "uuid": "f8a14af6-67d6-5d80-9b3a-2e3cc80f6e1c", 00:21:25.416 "is_configured": true, 00:21:25.416 "data_offset": 2048, 00:21:25.416 "data_size": 63488 00:21:25.416 }, 00:21:25.416 { 00:21:25.416 "name": "BaseBdev2", 00:21:25.416 "uuid": "85b8480e-e18f-5d88-84d0-82fb6cfa9c17", 00:21:25.416 "is_configured": true, 00:21:25.416 "data_offset": 2048, 00:21:25.416 "data_size": 63488 00:21:25.416 }, 00:21:25.416 { 00:21:25.416 "name": "BaseBdev3", 00:21:25.416 "uuid": "1c872ef6-2621-5028-aa7a-787f4f2d5704", 00:21:25.416 "is_configured": true, 00:21:25.416 "data_offset": 2048, 00:21:25.416 "data_size": 63488 00:21:25.416 }, 00:21:25.416 { 00:21:25.416 "name": "BaseBdev4", 00:21:25.416 "uuid": "70bf64da-6e1a-5892-8fbd-d6c6512b3521", 00:21:25.416 "is_configured": true, 00:21:25.416 "data_offset": 2048, 00:21:25.416 "data_size": 63488 00:21:25.416 } 00:21:25.416 ] 00:21:25.416 }' 00:21:25.416 17:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.416 17:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.674 17:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:25.674 17:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:25.674 [2024-11-08 17:09:02.231397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:21:26.609 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:26.609 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.609 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.609 17:09:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.609 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:26.609 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:26.609 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:21:26.609 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:26.609 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:26.609 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.610 "name": "raid_bdev1", 00:21:26.610 "uuid": "bb9d1c18-b4a8-44ac-a2f7-b837c46c1c7a", 00:21:26.610 "strip_size_kb": 64, 00:21:26.610 "state": "online", 00:21:26.610 "raid_level": "raid0", 00:21:26.610 "superblock": true, 00:21:26.610 "num_base_bdevs": 4, 00:21:26.610 "num_base_bdevs_discovered": 4, 00:21:26.610 "num_base_bdevs_operational": 4, 00:21:26.610 "base_bdevs_list": [ 00:21:26.610 { 00:21:26.610 "name": "BaseBdev1", 00:21:26.610 "uuid": "f8a14af6-67d6-5d80-9b3a-2e3cc80f6e1c", 00:21:26.610 "is_configured": true, 00:21:26.610 "data_offset": 2048, 00:21:26.610 "data_size": 63488 00:21:26.610 }, 00:21:26.610 { 00:21:26.610 "name": "BaseBdev2", 00:21:26.610 "uuid": "85b8480e-e18f-5d88-84d0-82fb6cfa9c17", 00:21:26.610 "is_configured": true, 00:21:26.610 "data_offset": 2048, 00:21:26.610 "data_size": 63488 00:21:26.610 }, 00:21:26.610 { 00:21:26.610 "name": "BaseBdev3", 00:21:26.610 "uuid": "1c872ef6-2621-5028-aa7a-787f4f2d5704", 00:21:26.610 "is_configured": true, 00:21:26.610 "data_offset": 2048, 00:21:26.610 "data_size": 63488 00:21:26.610 }, 00:21:26.610 { 00:21:26.610 "name": "BaseBdev4", 00:21:26.610 "uuid": "70bf64da-6e1a-5892-8fbd-d6c6512b3521", 00:21:26.610 "is_configured": true, 00:21:26.610 "data_offset": 2048, 00:21:26.610 "data_size": 63488 00:21:26.610 } 00:21:26.610 ] 00:21:26.610 }' 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.610 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.895 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:26.895 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.895 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:21:26.895 [2024-11-08 17:09:03.498022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:26.895 [2024-11-08 17:09:03.498162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:26.895 [2024-11-08 17:09:03.501254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:26.895 [2024-11-08 17:09:03.501320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.895 [2024-11-08 17:09:03.501368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:26.895 [2024-11-08 17:09:03.501381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:21:26.895 { 00:21:26.895 "results": [ 00:21:26.895 { 00:21:26.895 "job": "raid_bdev1", 00:21:26.895 "core_mask": "0x1", 00:21:26.895 "workload": "randrw", 00:21:26.895 "percentage": 50, 00:21:26.895 "status": "finished", 00:21:26.895 "queue_depth": 1, 00:21:26.895 "io_size": 131072, 00:21:26.895 "runtime": 1.264501, 00:21:26.895 "iops": 13659.933839514559, 00:21:26.895 "mibps": 1707.4917299393198, 00:21:26.895 "io_failed": 1, 00:21:26.895 "io_timeout": 0, 00:21:26.895 "avg_latency_us": 101.03464272673024, 00:21:26.895 "min_latency_us": 33.673846153846156, 00:21:26.895 "max_latency_us": 1701.4153846153847 00:21:26.895 } 00:21:26.895 ], 00:21:26.895 "core_count": 1 00:21:26.895 } 00:21:26.895 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.895 17:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69708 00:21:26.895 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69708 ']' 00:21:26.895 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69708 00:21:26.895 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # 
uname 00:21:26.895 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:26.895 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69708 00:21:26.896 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:26.896 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:26.896 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69708' 00:21:26.896 killing process with pid 69708 00:21:26.896 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69708 00:21:26.896 [2024-11-08 17:09:03.529543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:26.896 17:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69708 00:21:27.154 [2024-11-08 17:09:03.748935] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:28.087 17:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3UMQg9aQSE 00:21:28.087 17:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:28.087 17:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:28.087 17:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:21:28.087 17:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:21:28.087 17:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:28.087 17:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:28.087 17:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:21:28.087 00:21:28.087 real 0m4.013s 00:21:28.087 user 0m4.683s 00:21:28.087 sys 0m0.490s 00:21:28.087 
17:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:28.087 17:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.087 ************************************ 00:21:28.087 END TEST raid_write_error_test 00:21:28.087 ************************************ 00:21:28.087 17:09:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:28.087 17:09:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:21:28.087 17:09:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:28.087 17:09:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:28.087 17:09:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.087 ************************************ 00:21:28.087 START TEST raid_state_function_test 00:21:28.087 ************************************ 00:21:28.087 17:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:21:28.087 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:21:28.087 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:28.087 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:28.088 17:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:28.088 17:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:28.088 Process raid pid: 69846 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69846 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69846' 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69846 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69846 ']' 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:28.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:28.088 17:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.088 [2024-11-08 17:09:04.766733] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:21:28.088 [2024-11-08 17:09:04.766891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.346 [2024-11-08 17:09:04.929791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.346 [2024-11-08 17:09:05.051342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.604 [2024-11-08 17:09:05.203938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.604 [2024-11-08 17:09:05.203992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.169 [2024-11-08 17:09:05.670413] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:29.169 [2024-11-08 17:09:05.670474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:29.169 [2024-11-08 17:09:05.670485] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:29.169 [2024-11-08 17:09:05.670495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:29.169 [2024-11-08 17:09:05.670506] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:29.169 [2024-11-08 17:09:05.670515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:29.169 [2024-11-08 17:09:05.670521] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:29.169 [2024-11-08 17:09:05.670530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.169 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.170 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:21:29.170 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.170 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.170 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.170 "name": "Existed_Raid", 00:21:29.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.170 "strip_size_kb": 64, 00:21:29.170 "state": "configuring", 00:21:29.170 "raid_level": "concat", 00:21:29.170 "superblock": false, 00:21:29.170 "num_base_bdevs": 4, 00:21:29.170 "num_base_bdevs_discovered": 0, 00:21:29.170 "num_base_bdevs_operational": 4, 00:21:29.170 "base_bdevs_list": [ 00:21:29.170 { 00:21:29.170 "name": "BaseBdev1", 00:21:29.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.170 "is_configured": false, 00:21:29.170 "data_offset": 0, 00:21:29.170 "data_size": 0 00:21:29.170 }, 00:21:29.170 { 00:21:29.170 "name": "BaseBdev2", 00:21:29.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.170 "is_configured": false, 00:21:29.170 "data_offset": 0, 00:21:29.170 "data_size": 0 00:21:29.170 }, 00:21:29.170 { 00:21:29.170 "name": "BaseBdev3", 00:21:29.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.170 "is_configured": false, 00:21:29.170 "data_offset": 0, 00:21:29.170 "data_size": 0 00:21:29.170 }, 00:21:29.170 { 00:21:29.170 "name": "BaseBdev4", 00:21:29.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.170 "is_configured": false, 00:21:29.170 "data_offset": 0, 00:21:29.170 "data_size": 0 00:21:29.170 } 00:21:29.170 ] 00:21:29.170 }' 00:21:29.170 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.170 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.429 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:21:29.429 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.429 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.429 [2024-11-08 17:09:05.990461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:29.429 [2024-11-08 17:09:05.990513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:29.429 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.429 17:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:29.429 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.429 17:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.429 [2024-11-08 17:09:05.998461] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:29.429 [2024-11-08 17:09:05.998512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:29.429 [2024-11-08 17:09:05.998523] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:29.429 [2024-11-08 17:09:05.998535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:29.429 [2024-11-08 17:09:05.998542] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:29.429 [2024-11-08 17:09:05.998552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:29.429 [2024-11-08 17:09:05.998559] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:29.429 [2024-11-08 17:09:05.998568] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.429 [2024-11-08 17:09:06.039027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.429 BaseBdev1 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.429 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.429 [ 00:21:29.429 { 00:21:29.429 "name": "BaseBdev1", 00:21:29.429 "aliases": [ 00:21:29.429 "60bc889c-e968-468a-8004-0598a7a1f6af" 00:21:29.429 ], 00:21:29.429 "product_name": "Malloc disk", 00:21:29.429 "block_size": 512, 00:21:29.429 "num_blocks": 65536, 00:21:29.429 "uuid": "60bc889c-e968-468a-8004-0598a7a1f6af", 00:21:29.429 "assigned_rate_limits": { 00:21:29.429 "rw_ios_per_sec": 0, 00:21:29.429 "rw_mbytes_per_sec": 0, 00:21:29.429 "r_mbytes_per_sec": 0, 00:21:29.430 "w_mbytes_per_sec": 0 00:21:29.430 }, 00:21:29.430 "claimed": true, 00:21:29.430 "claim_type": "exclusive_write", 00:21:29.430 "zoned": false, 00:21:29.430 "supported_io_types": { 00:21:29.430 "read": true, 00:21:29.430 "write": true, 00:21:29.430 "unmap": true, 00:21:29.430 "flush": true, 00:21:29.430 "reset": true, 00:21:29.430 "nvme_admin": false, 00:21:29.430 "nvme_io": false, 00:21:29.430 "nvme_io_md": false, 00:21:29.430 "write_zeroes": true, 00:21:29.430 "zcopy": true, 00:21:29.430 "get_zone_info": false, 00:21:29.430 "zone_management": false, 00:21:29.430 "zone_append": false, 00:21:29.430 "compare": false, 00:21:29.430 "compare_and_write": false, 00:21:29.430 "abort": true, 00:21:29.430 "seek_hole": false, 00:21:29.430 "seek_data": false, 00:21:29.430 "copy": true, 00:21:29.430 "nvme_iov_md": false 00:21:29.430 }, 00:21:29.430 "memory_domains": [ 00:21:29.430 { 00:21:29.430 "dma_device_id": "system", 00:21:29.430 "dma_device_type": 1 00:21:29.430 }, 00:21:29.430 { 00:21:29.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.430 "dma_device_type": 2 00:21:29.430 } 00:21:29.430 ], 00:21:29.430 "driver_specific": {} 00:21:29.430 } 00:21:29.430 ] 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.430 "name": "Existed_Raid", 
00:21:29.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.430 "strip_size_kb": 64, 00:21:29.430 "state": "configuring", 00:21:29.430 "raid_level": "concat", 00:21:29.430 "superblock": false, 00:21:29.430 "num_base_bdevs": 4, 00:21:29.430 "num_base_bdevs_discovered": 1, 00:21:29.430 "num_base_bdevs_operational": 4, 00:21:29.430 "base_bdevs_list": [ 00:21:29.430 { 00:21:29.430 "name": "BaseBdev1", 00:21:29.430 "uuid": "60bc889c-e968-468a-8004-0598a7a1f6af", 00:21:29.430 "is_configured": true, 00:21:29.430 "data_offset": 0, 00:21:29.430 "data_size": 65536 00:21:29.430 }, 00:21:29.430 { 00:21:29.430 "name": "BaseBdev2", 00:21:29.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.430 "is_configured": false, 00:21:29.430 "data_offset": 0, 00:21:29.430 "data_size": 0 00:21:29.430 }, 00:21:29.430 { 00:21:29.430 "name": "BaseBdev3", 00:21:29.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.430 "is_configured": false, 00:21:29.430 "data_offset": 0, 00:21:29.430 "data_size": 0 00:21:29.430 }, 00:21:29.430 { 00:21:29.430 "name": "BaseBdev4", 00:21:29.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.430 "is_configured": false, 00:21:29.430 "data_offset": 0, 00:21:29.430 "data_size": 0 00:21:29.430 } 00:21:29.430 ] 00:21:29.430 }' 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.430 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.996 [2024-11-08 17:09:06.423172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:29.996 [2024-11-08 17:09:06.423375] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.996 [2024-11-08 17:09:06.431250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.996 [2024-11-08 17:09:06.433332] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:29.996 [2024-11-08 17:09:06.433470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:29.996 [2024-11-08 17:09:06.433535] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:29.996 [2024-11-08 17:09:06.433568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:29.996 [2024-11-08 17:09:06.433589] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:29.996 [2024-11-08 17:09:06.433609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.996 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.996 "name": "Existed_Raid", 00:21:29.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.996 "strip_size_kb": 64, 00:21:29.996 "state": "configuring", 00:21:29.996 "raid_level": "concat", 00:21:29.996 "superblock": false, 00:21:29.996 "num_base_bdevs": 4, 00:21:29.996 
"num_base_bdevs_discovered": 1, 00:21:29.996 "num_base_bdevs_operational": 4, 00:21:29.996 "base_bdevs_list": [ 00:21:29.996 { 00:21:29.996 "name": "BaseBdev1", 00:21:29.996 "uuid": "60bc889c-e968-468a-8004-0598a7a1f6af", 00:21:29.996 "is_configured": true, 00:21:29.996 "data_offset": 0, 00:21:29.996 "data_size": 65536 00:21:29.996 }, 00:21:29.996 { 00:21:29.996 "name": "BaseBdev2", 00:21:29.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.996 "is_configured": false, 00:21:29.996 "data_offset": 0, 00:21:29.996 "data_size": 0 00:21:29.996 }, 00:21:29.996 { 00:21:29.996 "name": "BaseBdev3", 00:21:29.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.996 "is_configured": false, 00:21:29.997 "data_offset": 0, 00:21:29.997 "data_size": 0 00:21:29.997 }, 00:21:29.997 { 00:21:29.997 "name": "BaseBdev4", 00:21:29.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.997 "is_configured": false, 00:21:29.997 "data_offset": 0, 00:21:29.997 "data_size": 0 00:21:29.997 } 00:21:29.997 ] 00:21:29.997 }' 00:21:29.997 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.997 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.255 [2024-11-08 17:09:06.792356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:30.255 BaseBdev2 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:30.255 17:09:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.255 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.255 [ 00:21:30.255 { 00:21:30.255 "name": "BaseBdev2", 00:21:30.255 "aliases": [ 00:21:30.255 "e723dfaf-fe59-45b3-bc5d-42ea4d998e7e" 00:21:30.255 ], 00:21:30.255 "product_name": "Malloc disk", 00:21:30.255 "block_size": 512, 00:21:30.255 "num_blocks": 65536, 00:21:30.255 "uuid": "e723dfaf-fe59-45b3-bc5d-42ea4d998e7e", 00:21:30.255 "assigned_rate_limits": { 00:21:30.255 "rw_ios_per_sec": 0, 00:21:30.255 "rw_mbytes_per_sec": 0, 00:21:30.255 "r_mbytes_per_sec": 0, 00:21:30.255 "w_mbytes_per_sec": 0 00:21:30.255 }, 00:21:30.255 "claimed": true, 00:21:30.255 "claim_type": "exclusive_write", 00:21:30.255 "zoned": false, 00:21:30.255 "supported_io_types": { 
00:21:30.255 "read": true, 00:21:30.255 "write": true, 00:21:30.255 "unmap": true, 00:21:30.255 "flush": true, 00:21:30.255 "reset": true, 00:21:30.255 "nvme_admin": false, 00:21:30.256 "nvme_io": false, 00:21:30.256 "nvme_io_md": false, 00:21:30.256 "write_zeroes": true, 00:21:30.256 "zcopy": true, 00:21:30.256 "get_zone_info": false, 00:21:30.256 "zone_management": false, 00:21:30.256 "zone_append": false, 00:21:30.256 "compare": false, 00:21:30.256 "compare_and_write": false, 00:21:30.256 "abort": true, 00:21:30.256 "seek_hole": false, 00:21:30.256 "seek_data": false, 00:21:30.256 "copy": true, 00:21:30.256 "nvme_iov_md": false 00:21:30.256 }, 00:21:30.256 "memory_domains": [ 00:21:30.256 { 00:21:30.256 "dma_device_id": "system", 00:21:30.256 "dma_device_type": 1 00:21:30.256 }, 00:21:30.256 { 00:21:30.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.256 "dma_device_type": 2 00:21:30.256 } 00:21:30.256 ], 00:21:30.256 "driver_specific": {} 00:21:30.256 } 00:21:30.256 ] 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.256 "name": "Existed_Raid", 00:21:30.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.256 "strip_size_kb": 64, 00:21:30.256 "state": "configuring", 00:21:30.256 "raid_level": "concat", 00:21:30.256 "superblock": false, 00:21:30.256 "num_base_bdevs": 4, 00:21:30.256 "num_base_bdevs_discovered": 2, 00:21:30.256 "num_base_bdevs_operational": 4, 00:21:30.256 "base_bdevs_list": [ 00:21:30.256 { 00:21:30.256 "name": "BaseBdev1", 00:21:30.256 "uuid": "60bc889c-e968-468a-8004-0598a7a1f6af", 00:21:30.256 "is_configured": true, 00:21:30.256 "data_offset": 0, 00:21:30.256 "data_size": 65536 00:21:30.256 }, 00:21:30.256 { 00:21:30.256 "name": "BaseBdev2", 00:21:30.256 "uuid": "e723dfaf-fe59-45b3-bc5d-42ea4d998e7e", 00:21:30.256 
"is_configured": true, 00:21:30.256 "data_offset": 0, 00:21:30.256 "data_size": 65536 00:21:30.256 }, 00:21:30.256 { 00:21:30.256 "name": "BaseBdev3", 00:21:30.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.256 "is_configured": false, 00:21:30.256 "data_offset": 0, 00:21:30.256 "data_size": 0 00:21:30.256 }, 00:21:30.256 { 00:21:30.256 "name": "BaseBdev4", 00:21:30.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.256 "is_configured": false, 00:21:30.256 "data_offset": 0, 00:21:30.256 "data_size": 0 00:21:30.256 } 00:21:30.256 ] 00:21:30.256 }' 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.256 17:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.513 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:30.513 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.513 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.772 [2024-11-08 17:09:07.262749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:30.772 BaseBdev3 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.772 [ 00:21:30.772 { 00:21:30.772 "name": "BaseBdev3", 00:21:30.772 "aliases": [ 00:21:30.772 "8511e437-5ac8-48ab-a716-ba5be5826e7e" 00:21:30.772 ], 00:21:30.772 "product_name": "Malloc disk", 00:21:30.772 "block_size": 512, 00:21:30.772 "num_blocks": 65536, 00:21:30.772 "uuid": "8511e437-5ac8-48ab-a716-ba5be5826e7e", 00:21:30.772 "assigned_rate_limits": { 00:21:30.772 "rw_ios_per_sec": 0, 00:21:30.772 "rw_mbytes_per_sec": 0, 00:21:30.772 "r_mbytes_per_sec": 0, 00:21:30.772 "w_mbytes_per_sec": 0 00:21:30.772 }, 00:21:30.772 "claimed": true, 00:21:30.772 "claim_type": "exclusive_write", 00:21:30.772 "zoned": false, 00:21:30.772 "supported_io_types": { 00:21:30.772 "read": true, 00:21:30.772 "write": true, 00:21:30.772 "unmap": true, 00:21:30.772 "flush": true, 00:21:30.772 "reset": true, 00:21:30.772 "nvme_admin": false, 00:21:30.772 "nvme_io": false, 00:21:30.772 "nvme_io_md": false, 00:21:30.772 "write_zeroes": true, 00:21:30.772 "zcopy": true, 00:21:30.772 "get_zone_info": false, 00:21:30.772 "zone_management": false, 00:21:30.772 "zone_append": false, 00:21:30.772 "compare": false, 00:21:30.772 "compare_and_write": false, 
00:21:30.772 "abort": true, 00:21:30.772 "seek_hole": false, 00:21:30.772 "seek_data": false, 00:21:30.772 "copy": true, 00:21:30.772 "nvme_iov_md": false 00:21:30.772 }, 00:21:30.772 "memory_domains": [ 00:21:30.772 { 00:21:30.772 "dma_device_id": "system", 00:21:30.772 "dma_device_type": 1 00:21:30.772 }, 00:21:30.772 { 00:21:30.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.772 "dma_device_type": 2 00:21:30.772 } 00:21:30.772 ], 00:21:30.772 "driver_specific": {} 00:21:30.772 } 00:21:30.772 ] 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.772 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.773 "name": "Existed_Raid", 00:21:30.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.773 "strip_size_kb": 64, 00:21:30.773 "state": "configuring", 00:21:30.773 "raid_level": "concat", 00:21:30.773 "superblock": false, 00:21:30.773 "num_base_bdevs": 4, 00:21:30.773 "num_base_bdevs_discovered": 3, 00:21:30.773 "num_base_bdevs_operational": 4, 00:21:30.773 "base_bdevs_list": [ 00:21:30.773 { 00:21:30.773 "name": "BaseBdev1", 00:21:30.773 "uuid": "60bc889c-e968-468a-8004-0598a7a1f6af", 00:21:30.773 "is_configured": true, 00:21:30.773 "data_offset": 0, 00:21:30.773 "data_size": 65536 00:21:30.773 }, 00:21:30.773 { 00:21:30.773 "name": "BaseBdev2", 00:21:30.773 "uuid": "e723dfaf-fe59-45b3-bc5d-42ea4d998e7e", 00:21:30.773 "is_configured": true, 00:21:30.773 "data_offset": 0, 00:21:30.773 "data_size": 65536 00:21:30.773 }, 00:21:30.773 { 00:21:30.773 "name": "BaseBdev3", 00:21:30.773 "uuid": "8511e437-5ac8-48ab-a716-ba5be5826e7e", 00:21:30.773 "is_configured": true, 00:21:30.773 "data_offset": 0, 00:21:30.773 "data_size": 65536 00:21:30.773 }, 00:21:30.773 { 00:21:30.773 "name": "BaseBdev4", 00:21:30.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.773 "is_configured": false, 
00:21:30.773 "data_offset": 0, 00:21:30.773 "data_size": 0 00:21:30.773 } 00:21:30.773 ] 00:21:30.773 }' 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.773 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.031 [2024-11-08 17:09:07.643623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:31.031 [2024-11-08 17:09:07.643682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:31.031 [2024-11-08 17:09:07.643691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:31.031 [2024-11-08 17:09:07.644002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:31.031 [2024-11-08 17:09:07.644160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:31.031 [2024-11-08 17:09:07.644172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:31.031 [2024-11-08 17:09:07.644431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.031 BaseBdev4 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.031 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.031 [ 00:21:31.031 { 00:21:31.031 "name": "BaseBdev4", 00:21:31.031 "aliases": [ 00:21:31.031 "a31aa250-1fd5-434c-be0b-41b5ec69bcfd" 00:21:31.031 ], 00:21:31.031 "product_name": "Malloc disk", 00:21:31.031 "block_size": 512, 00:21:31.031 "num_blocks": 65536, 00:21:31.031 "uuid": "a31aa250-1fd5-434c-be0b-41b5ec69bcfd", 00:21:31.031 "assigned_rate_limits": { 00:21:31.031 "rw_ios_per_sec": 0, 00:21:31.031 "rw_mbytes_per_sec": 0, 00:21:31.031 "r_mbytes_per_sec": 0, 00:21:31.031 "w_mbytes_per_sec": 0 00:21:31.031 }, 00:21:31.031 "claimed": true, 00:21:31.031 "claim_type": "exclusive_write", 00:21:31.031 "zoned": false, 00:21:31.031 "supported_io_types": { 00:21:31.031 "read": true, 00:21:31.031 "write": true, 00:21:31.031 "unmap": true, 00:21:31.031 "flush": true, 00:21:31.031 "reset": true, 00:21:31.031 
"nvme_admin": false, 00:21:31.031 "nvme_io": false, 00:21:31.031 "nvme_io_md": false, 00:21:31.031 "write_zeroes": true, 00:21:31.031 "zcopy": true, 00:21:31.031 "get_zone_info": false, 00:21:31.031 "zone_management": false, 00:21:31.031 "zone_append": false, 00:21:31.031 "compare": false, 00:21:31.031 "compare_and_write": false, 00:21:31.031 "abort": true, 00:21:31.032 "seek_hole": false, 00:21:31.032 "seek_data": false, 00:21:31.032 "copy": true, 00:21:31.032 "nvme_iov_md": false 00:21:31.032 }, 00:21:31.032 "memory_domains": [ 00:21:31.032 { 00:21:31.032 "dma_device_id": "system", 00:21:31.032 "dma_device_type": 1 00:21:31.032 }, 00:21:31.032 { 00:21:31.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.032 "dma_device_type": 2 00:21:31.032 } 00:21:31.032 ], 00:21:31.032 "driver_specific": {} 00:21:31.032 } 00:21:31.032 ] 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:31.032 
17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.032 "name": "Existed_Raid", 00:21:31.032 "uuid": "daae9c1e-7d97-4c81-b3f7-ad102482bcb8", 00:21:31.032 "strip_size_kb": 64, 00:21:31.032 "state": "online", 00:21:31.032 "raid_level": "concat", 00:21:31.032 "superblock": false, 00:21:31.032 "num_base_bdevs": 4, 00:21:31.032 "num_base_bdevs_discovered": 4, 00:21:31.032 "num_base_bdevs_operational": 4, 00:21:31.032 "base_bdevs_list": [ 00:21:31.032 { 00:21:31.032 "name": "BaseBdev1", 00:21:31.032 "uuid": "60bc889c-e968-468a-8004-0598a7a1f6af", 00:21:31.032 "is_configured": true, 00:21:31.032 "data_offset": 0, 00:21:31.032 "data_size": 65536 00:21:31.032 }, 00:21:31.032 { 00:21:31.032 "name": "BaseBdev2", 00:21:31.032 "uuid": "e723dfaf-fe59-45b3-bc5d-42ea4d998e7e", 00:21:31.032 "is_configured": true, 00:21:31.032 "data_offset": 0, 00:21:31.032 "data_size": 65536 00:21:31.032 }, 00:21:31.032 { 00:21:31.032 "name": "BaseBdev3", 
00:21:31.032 "uuid": "8511e437-5ac8-48ab-a716-ba5be5826e7e", 00:21:31.032 "is_configured": true, 00:21:31.032 "data_offset": 0, 00:21:31.032 "data_size": 65536 00:21:31.032 }, 00:21:31.032 { 00:21:31.032 "name": "BaseBdev4", 00:21:31.032 "uuid": "a31aa250-1fd5-434c-be0b-41b5ec69bcfd", 00:21:31.032 "is_configured": true, 00:21:31.032 "data_offset": 0, 00:21:31.032 "data_size": 65536 00:21:31.032 } 00:21:31.032 ] 00:21:31.032 }' 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.032 17:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:31.599 [2024-11-08 17:09:08.012155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.599 
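The `verify_raid_bdev_state` steps traced above reduce to filtering the `bdev_raid_get_bdevs all` output with jq and comparing a few fields against the expected values. A minimal self-contained sketch of that check (the JSON here is a trimmed sample standing in for the live RPC response, not a real query):

```shell
# Hedged sketch of the verify_raid_bdev_state-style check seen in the trace.
# raid_json is a trimmed stand-in for `rpc_cmd bdev_raid_get_bdevs all` output.
raid_json='[{"name":"Existed_Raid","state":"online","raid_level":"concat","strip_size_kb":64,"num_base_bdevs":4,"num_base_bdevs_discovered":4}]'

# Select the target raid bdev, as the script does with
# jq -r '.[] | select(.name == "Existed_Raid")'
info=$(printf '%s' "$raid_json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Pull out the fields the test compares against its expectations.
state=$(printf '%s' "$info" | jq -r '.state')
discovered=$(printf '%s' "$info" | jq -r '.num_base_bdevs_discovered')

echo "state=$state discovered=$discovered"
```

The real helper additionally checks `raid_level`, `strip_size_kb`, and `num_base_bdevs_operational` the same way; requires jq on PATH, as the traced script itself does.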
17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:31.599 "name": "Existed_Raid", 00:21:31.599 "aliases": [ 00:21:31.599 "daae9c1e-7d97-4c81-b3f7-ad102482bcb8" 00:21:31.599 ], 00:21:31.599 "product_name": "Raid Volume", 00:21:31.599 "block_size": 512, 00:21:31.599 "num_blocks": 262144, 00:21:31.599 "uuid": "daae9c1e-7d97-4c81-b3f7-ad102482bcb8", 00:21:31.599 "assigned_rate_limits": { 00:21:31.599 "rw_ios_per_sec": 0, 00:21:31.599 "rw_mbytes_per_sec": 0, 00:21:31.599 "r_mbytes_per_sec": 0, 00:21:31.599 "w_mbytes_per_sec": 0 00:21:31.599 }, 00:21:31.599 "claimed": false, 00:21:31.599 "zoned": false, 00:21:31.599 "supported_io_types": { 00:21:31.599 "read": true, 00:21:31.599 "write": true, 00:21:31.599 "unmap": true, 00:21:31.599 "flush": true, 00:21:31.599 "reset": true, 00:21:31.599 "nvme_admin": false, 00:21:31.599 "nvme_io": false, 00:21:31.599 "nvme_io_md": false, 00:21:31.599 "write_zeroes": true, 00:21:31.599 "zcopy": false, 00:21:31.599 "get_zone_info": false, 00:21:31.599 "zone_management": false, 00:21:31.599 "zone_append": false, 00:21:31.599 "compare": false, 00:21:31.599 "compare_and_write": false, 00:21:31.599 "abort": false, 00:21:31.599 "seek_hole": false, 00:21:31.599 "seek_data": false, 00:21:31.599 "copy": false, 00:21:31.599 "nvme_iov_md": false 00:21:31.599 }, 00:21:31.599 "memory_domains": [ 00:21:31.599 { 00:21:31.599 "dma_device_id": "system", 00:21:31.599 "dma_device_type": 1 00:21:31.599 }, 00:21:31.599 { 00:21:31.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.599 "dma_device_type": 2 00:21:31.599 }, 00:21:31.599 { 00:21:31.599 "dma_device_id": "system", 00:21:31.599 "dma_device_type": 1 00:21:31.599 }, 00:21:31.599 { 00:21:31.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.599 "dma_device_type": 2 00:21:31.599 }, 00:21:31.599 { 00:21:31.599 "dma_device_id": "system", 00:21:31.599 "dma_device_type": 1 00:21:31.599 }, 00:21:31.599 { 00:21:31.599 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:31.599 "dma_device_type": 2 00:21:31.599 }, 00:21:31.599 { 00:21:31.599 "dma_device_id": "system", 00:21:31.599 "dma_device_type": 1 00:21:31.599 }, 00:21:31.599 { 00:21:31.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.599 "dma_device_type": 2 00:21:31.599 } 00:21:31.599 ], 00:21:31.599 "driver_specific": { 00:21:31.599 "raid": { 00:21:31.599 "uuid": "daae9c1e-7d97-4c81-b3f7-ad102482bcb8", 00:21:31.599 "strip_size_kb": 64, 00:21:31.599 "state": "online", 00:21:31.599 "raid_level": "concat", 00:21:31.599 "superblock": false, 00:21:31.599 "num_base_bdevs": 4, 00:21:31.599 "num_base_bdevs_discovered": 4, 00:21:31.599 "num_base_bdevs_operational": 4, 00:21:31.599 "base_bdevs_list": [ 00:21:31.599 { 00:21:31.599 "name": "BaseBdev1", 00:21:31.599 "uuid": "60bc889c-e968-468a-8004-0598a7a1f6af", 00:21:31.599 "is_configured": true, 00:21:31.599 "data_offset": 0, 00:21:31.599 "data_size": 65536 00:21:31.599 }, 00:21:31.599 { 00:21:31.599 "name": "BaseBdev2", 00:21:31.599 "uuid": "e723dfaf-fe59-45b3-bc5d-42ea4d998e7e", 00:21:31.599 "is_configured": true, 00:21:31.599 "data_offset": 0, 00:21:31.599 "data_size": 65536 00:21:31.599 }, 00:21:31.599 { 00:21:31.599 "name": "BaseBdev3", 00:21:31.599 "uuid": "8511e437-5ac8-48ab-a716-ba5be5826e7e", 00:21:31.599 "is_configured": true, 00:21:31.599 "data_offset": 0, 00:21:31.599 "data_size": 65536 00:21:31.599 }, 00:21:31.599 { 00:21:31.599 "name": "BaseBdev4", 00:21:31.599 "uuid": "a31aa250-1fd5-434c-be0b-41b5ec69bcfd", 00:21:31.599 "is_configured": true, 00:21:31.599 "data_offset": 0, 00:21:31.599 "data_size": 65536 00:21:31.599 } 00:21:31.599 ] 00:21:31.599 } 00:21:31.599 } 00:21:31.599 }' 00:21:31.599 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:31.600 BaseBdev2 
00:21:31.600 BaseBdev3 00:21:31.600 BaseBdev4' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.600 17:09:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:31.600 17:09:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.600 [2024-11-08 17:09:08.235903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:31.600 [2024-11-08 17:09:08.235937] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:31.600 [2024-11-08 17:09:08.235993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.600 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.940 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.940 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.940 "name": "Existed_Raid", 00:21:31.940 "uuid": "daae9c1e-7d97-4c81-b3f7-ad102482bcb8", 00:21:31.940 "strip_size_kb": 64, 00:21:31.940 "state": "offline", 00:21:31.940 "raid_level": "concat", 00:21:31.940 "superblock": false, 00:21:31.940 "num_base_bdevs": 4, 00:21:31.940 "num_base_bdevs_discovered": 3, 00:21:31.940 "num_base_bdevs_operational": 3, 00:21:31.940 "base_bdevs_list": [ 00:21:31.940 { 00:21:31.940 "name": null, 00:21:31.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.940 "is_configured": false, 00:21:31.940 "data_offset": 0, 00:21:31.940 "data_size": 65536 00:21:31.940 }, 00:21:31.940 { 00:21:31.940 "name": "BaseBdev2", 00:21:31.940 "uuid": "e723dfaf-fe59-45b3-bc5d-42ea4d998e7e", 00:21:31.940 "is_configured": 
true, 00:21:31.940 "data_offset": 0, 00:21:31.940 "data_size": 65536 00:21:31.940 }, 00:21:31.940 { 00:21:31.940 "name": "BaseBdev3", 00:21:31.940 "uuid": "8511e437-5ac8-48ab-a716-ba5be5826e7e", 00:21:31.940 "is_configured": true, 00:21:31.940 "data_offset": 0, 00:21:31.940 "data_size": 65536 00:21:31.940 }, 00:21:31.940 { 00:21:31.940 "name": "BaseBdev4", 00:21:31.940 "uuid": "a31aa250-1fd5-434c-be0b-41b5ec69bcfd", 00:21:31.940 "is_configured": true, 00:21:31.940 "data_offset": 0, 00:21:31.940 "data_size": 65536 00:21:31.940 } 00:21:31.940 ] 00:21:31.940 }' 00:21:31.940 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.940 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.199 [2024-11-08 17:09:08.658721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.199 [2024-11-08 17:09:08.777901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:32.199 17:09:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.199 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.199 [2024-11-08 17:09:08.880782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:32.199 [2024-11-08 17:09:08.880839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.459 17:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.459 BaseBdev2 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.459 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.459 [ 00:21:32.459 { 00:21:32.459 "name": "BaseBdev2", 00:21:32.459 "aliases": [ 00:21:32.459 "b0fd3401-f904-47d7-b3f3-dc5914a4218d" 00:21:32.459 ], 00:21:32.459 "product_name": "Malloc disk", 00:21:32.459 "block_size": 512, 00:21:32.459 "num_blocks": 65536, 00:21:32.459 "uuid": "b0fd3401-f904-47d7-b3f3-dc5914a4218d", 00:21:32.459 "assigned_rate_limits": { 00:21:32.459 "rw_ios_per_sec": 0, 00:21:32.459 "rw_mbytes_per_sec": 0, 00:21:32.459 "r_mbytes_per_sec": 0, 00:21:32.460 "w_mbytes_per_sec": 0 00:21:32.460 }, 00:21:32.460 "claimed": false, 00:21:32.460 "zoned": false, 00:21:32.460 "supported_io_types": { 00:21:32.460 "read": true, 00:21:32.460 "write": true, 00:21:32.460 "unmap": true, 00:21:32.460 "flush": true, 00:21:32.460 "reset": true, 00:21:32.460 "nvme_admin": false, 00:21:32.460 "nvme_io": false, 00:21:32.460 "nvme_io_md": false, 00:21:32.460 "write_zeroes": true, 00:21:32.460 "zcopy": true, 00:21:32.460 "get_zone_info": false, 00:21:32.460 "zone_management": false, 00:21:32.460 "zone_append": false, 00:21:32.460 "compare": false, 00:21:32.460 "compare_and_write": false, 00:21:32.460 "abort": true, 00:21:32.460 "seek_hole": false, 00:21:32.460 
"seek_data": false, 00:21:32.460 "copy": true, 00:21:32.460 "nvme_iov_md": false 00:21:32.460 }, 00:21:32.460 "memory_domains": [ 00:21:32.460 { 00:21:32.460 "dma_device_id": "system", 00:21:32.460 "dma_device_type": 1 00:21:32.460 }, 00:21:32.460 { 00:21:32.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.460 "dma_device_type": 2 00:21:32.460 } 00:21:32.460 ], 00:21:32.460 "driver_specific": {} 00:21:32.460 } 00:21:32.460 ] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.460 BaseBdev3 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
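The `waitforbdev BaseBdev2` call traced above (with `bdev_timeout=2000` defaulted when no timeout argument is given) is a poll-until-present pattern around `bdev_get_bdevs`. A hedged sketch of that pattern, with `rpc_cmd` replaced by a local stub so the snippet runs standalone; the real helper talks to the SPDK RPC server and its exact retry count is an assumption here:

```shell
# Hedged sketch of the waitforbdev polling pattern from the trace.
# rpc_cmd is a stand-in stub: it pretends the bdev appears on the second poll.
rpc_cmd() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 2 ]
}

waitforbdev() {
  bdev_name=$1
  bdev_timeout=${2:-2000}   # default matches bdev_timeout=2000 in the trace
  i=0
  while [ "$i" -lt 5 ]; do    # retry bound is illustrative, not from the source
    if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"; then
      return 0                # bdev is registered and visible
    fi
    i=$((i + 1))
  done
  return 1                    # gave up waiting
}

attempts=0
waitforbdev BaseBdev2 && result=found || result=missing
echo "$result"
```

In the log this is how the test serializes on `bdev_malloc_create` completing before it claims the new base bdev into the raid.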
00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.460 [ 00:21:32.460 { 00:21:32.460 "name": "BaseBdev3", 00:21:32.460 "aliases": [ 00:21:32.460 "414dd2f1-dfe5-4149-9f58-da194c605a7a" 00:21:32.460 ], 00:21:32.460 "product_name": "Malloc disk", 00:21:32.460 "block_size": 512, 00:21:32.460 "num_blocks": 65536, 00:21:32.460 "uuid": "414dd2f1-dfe5-4149-9f58-da194c605a7a", 00:21:32.460 "assigned_rate_limits": { 00:21:32.460 "rw_ios_per_sec": 0, 00:21:32.460 "rw_mbytes_per_sec": 0, 00:21:32.460 "r_mbytes_per_sec": 0, 00:21:32.460 "w_mbytes_per_sec": 0 00:21:32.460 }, 00:21:32.460 "claimed": false, 00:21:32.460 "zoned": false, 00:21:32.460 "supported_io_types": { 00:21:32.460 "read": true, 00:21:32.460 "write": true, 00:21:32.460 "unmap": true, 00:21:32.460 "flush": true, 00:21:32.460 "reset": true, 00:21:32.460 "nvme_admin": false, 00:21:32.460 "nvme_io": false, 00:21:32.460 "nvme_io_md": false, 00:21:32.460 "write_zeroes": true, 00:21:32.460 "zcopy": true, 00:21:32.460 "get_zone_info": false, 00:21:32.460 "zone_management": false, 00:21:32.460 "zone_append": false, 00:21:32.460 "compare": false, 00:21:32.460 "compare_and_write": false, 00:21:32.460 "abort": true, 00:21:32.460 "seek_hole": false, 00:21:32.460 "seek_data": false, 
00:21:32.460 "copy": true, 00:21:32.460 "nvme_iov_md": false 00:21:32.460 }, 00:21:32.460 "memory_domains": [ 00:21:32.460 { 00:21:32.460 "dma_device_id": "system", 00:21:32.460 "dma_device_type": 1 00:21:32.460 }, 00:21:32.460 { 00:21:32.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.460 "dma_device_type": 2 00:21:32.460 } 00:21:32.460 ], 00:21:32.460 "driver_specific": {} 00:21:32.460 } 00:21:32.460 ] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.460 BaseBdev4 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:32.460 
17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.460 [ 00:21:32.460 { 00:21:32.460 "name": "BaseBdev4", 00:21:32.460 "aliases": [ 00:21:32.460 "1444490d-390b-412a-bebd-3d8741e94b91" 00:21:32.460 ], 00:21:32.460 "product_name": "Malloc disk", 00:21:32.460 "block_size": 512, 00:21:32.460 "num_blocks": 65536, 00:21:32.460 "uuid": "1444490d-390b-412a-bebd-3d8741e94b91", 00:21:32.460 "assigned_rate_limits": { 00:21:32.460 "rw_ios_per_sec": 0, 00:21:32.460 "rw_mbytes_per_sec": 0, 00:21:32.460 "r_mbytes_per_sec": 0, 00:21:32.460 "w_mbytes_per_sec": 0 00:21:32.460 }, 00:21:32.460 "claimed": false, 00:21:32.460 "zoned": false, 00:21:32.460 "supported_io_types": { 00:21:32.460 "read": true, 00:21:32.460 "write": true, 00:21:32.460 "unmap": true, 00:21:32.460 "flush": true, 00:21:32.460 "reset": true, 00:21:32.460 "nvme_admin": false, 00:21:32.460 "nvme_io": false, 00:21:32.460 "nvme_io_md": false, 00:21:32.460 "write_zeroes": true, 00:21:32.460 "zcopy": true, 00:21:32.460 "get_zone_info": false, 00:21:32.460 "zone_management": false, 00:21:32.460 "zone_append": false, 00:21:32.460 "compare": false, 00:21:32.460 "compare_and_write": false, 00:21:32.460 "abort": true, 00:21:32.460 "seek_hole": false, 00:21:32.460 "seek_data": false, 00:21:32.460 
"copy": true, 00:21:32.460 "nvme_iov_md": false 00:21:32.460 }, 00:21:32.460 "memory_domains": [ 00:21:32.460 { 00:21:32.460 "dma_device_id": "system", 00:21:32.460 "dma_device_type": 1 00:21:32.460 }, 00:21:32.460 { 00:21:32.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.460 "dma_device_type": 2 00:21:32.460 } 00:21:32.460 ], 00:21:32.460 "driver_specific": {} 00:21:32.460 } 00:21:32.460 ] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.460 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.460 [2024-11-08 17:09:09.147430] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:32.460 [2024-11-08 17:09:09.147496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:32.460 [2024-11-08 17:09:09.147522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:32.460 [2024-11-08 17:09:09.149555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:32.461 [2024-11-08 17:09:09.149613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.461 17:09:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.461 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.720 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.720 "name": "Existed_Raid", 00:21:32.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.720 "strip_size_kb": 64, 00:21:32.720 "state": "configuring", 00:21:32.720 
"raid_level": "concat", 00:21:32.720 "superblock": false, 00:21:32.720 "num_base_bdevs": 4, 00:21:32.720 "num_base_bdevs_discovered": 3, 00:21:32.720 "num_base_bdevs_operational": 4, 00:21:32.720 "base_bdevs_list": [ 00:21:32.720 { 00:21:32.720 "name": "BaseBdev1", 00:21:32.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.720 "is_configured": false, 00:21:32.720 "data_offset": 0, 00:21:32.720 "data_size": 0 00:21:32.720 }, 00:21:32.720 { 00:21:32.720 "name": "BaseBdev2", 00:21:32.720 "uuid": "b0fd3401-f904-47d7-b3f3-dc5914a4218d", 00:21:32.720 "is_configured": true, 00:21:32.720 "data_offset": 0, 00:21:32.720 "data_size": 65536 00:21:32.720 }, 00:21:32.720 { 00:21:32.720 "name": "BaseBdev3", 00:21:32.720 "uuid": "414dd2f1-dfe5-4149-9f58-da194c605a7a", 00:21:32.720 "is_configured": true, 00:21:32.720 "data_offset": 0, 00:21:32.720 "data_size": 65536 00:21:32.720 }, 00:21:32.720 { 00:21:32.720 "name": "BaseBdev4", 00:21:32.720 "uuid": "1444490d-390b-412a-bebd-3d8741e94b91", 00:21:32.720 "is_configured": true, 00:21:32.720 "data_offset": 0, 00:21:32.720 "data_size": 65536 00:21:32.720 } 00:21:32.720 ] 00:21:32.720 }' 00:21:32.720 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.720 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.979 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.980 [2024-11-08 17:09:09.499521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.980 "name": "Existed_Raid", 00:21:32.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.980 "strip_size_kb": 64, 00:21:32.980 "state": "configuring", 00:21:32.980 "raid_level": "concat", 00:21:32.980 "superblock": false, 
00:21:32.980 "num_base_bdevs": 4, 00:21:32.980 "num_base_bdevs_discovered": 2, 00:21:32.980 "num_base_bdevs_operational": 4, 00:21:32.980 "base_bdevs_list": [ 00:21:32.980 { 00:21:32.980 "name": "BaseBdev1", 00:21:32.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.980 "is_configured": false, 00:21:32.980 "data_offset": 0, 00:21:32.980 "data_size": 0 00:21:32.980 }, 00:21:32.980 { 00:21:32.980 "name": null, 00:21:32.980 "uuid": "b0fd3401-f904-47d7-b3f3-dc5914a4218d", 00:21:32.980 "is_configured": false, 00:21:32.980 "data_offset": 0, 00:21:32.980 "data_size": 65536 00:21:32.980 }, 00:21:32.980 { 00:21:32.980 "name": "BaseBdev3", 00:21:32.980 "uuid": "414dd2f1-dfe5-4149-9f58-da194c605a7a", 00:21:32.980 "is_configured": true, 00:21:32.980 "data_offset": 0, 00:21:32.980 "data_size": 65536 00:21:32.980 }, 00:21:32.980 { 00:21:32.980 "name": "BaseBdev4", 00:21:32.980 "uuid": "1444490d-390b-412a-bebd-3d8741e94b91", 00:21:32.980 "is_configured": true, 00:21:32.980 "data_offset": 0, 00:21:32.980 "data_size": 65536 00:21:32.980 } 00:21:32.980 ] 00:21:32.980 }' 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.980 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:33.237 17:09:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.237 [2024-11-08 17:09:09.936560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:33.237 BaseBdev1 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.237 17:09:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:33.507 [ 00:21:33.507 { 00:21:33.507 "name": "BaseBdev1", 00:21:33.507 "aliases": [ 00:21:33.507 "1b7f264e-7653-42f0-ad80-e38f2f769e43" 00:21:33.507 ], 00:21:33.507 "product_name": "Malloc disk", 00:21:33.507 "block_size": 512, 00:21:33.507 "num_blocks": 65536, 00:21:33.507 "uuid": "1b7f264e-7653-42f0-ad80-e38f2f769e43", 00:21:33.507 "assigned_rate_limits": { 00:21:33.507 "rw_ios_per_sec": 0, 00:21:33.507 "rw_mbytes_per_sec": 0, 00:21:33.507 "r_mbytes_per_sec": 0, 00:21:33.507 "w_mbytes_per_sec": 0 00:21:33.507 }, 00:21:33.507 "claimed": true, 00:21:33.507 "claim_type": "exclusive_write", 00:21:33.507 "zoned": false, 00:21:33.507 "supported_io_types": { 00:21:33.507 "read": true, 00:21:33.507 "write": true, 00:21:33.507 "unmap": true, 00:21:33.507 "flush": true, 00:21:33.507 "reset": true, 00:21:33.507 "nvme_admin": false, 00:21:33.507 "nvme_io": false, 00:21:33.507 "nvme_io_md": false, 00:21:33.507 "write_zeroes": true, 00:21:33.507 "zcopy": true, 00:21:33.507 "get_zone_info": false, 00:21:33.507 "zone_management": false, 00:21:33.507 "zone_append": false, 00:21:33.507 "compare": false, 00:21:33.507 "compare_and_write": false, 00:21:33.507 "abort": true, 00:21:33.507 "seek_hole": false, 00:21:33.507 "seek_data": false, 00:21:33.507 "copy": true, 00:21:33.507 "nvme_iov_md": false 00:21:33.507 }, 00:21:33.507 "memory_domains": [ 00:21:33.507 { 00:21:33.507 "dma_device_id": "system", 00:21:33.507 "dma_device_type": 1 00:21:33.507 }, 00:21:33.507 { 00:21:33.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.507 "dma_device_type": 2 00:21:33.507 } 00:21:33.507 ], 00:21:33.507 "driver_specific": {} 00:21:33.507 } 00:21:33.507 ] 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.507 "name": "Existed_Raid", 00:21:33.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.507 "strip_size_kb": 64, 00:21:33.507 "state": "configuring", 00:21:33.507 "raid_level": "concat", 00:21:33.507 "superblock": false, 
00:21:33.507 "num_base_bdevs": 4, 00:21:33.507 "num_base_bdevs_discovered": 3, 00:21:33.507 "num_base_bdevs_operational": 4, 00:21:33.507 "base_bdevs_list": [ 00:21:33.507 { 00:21:33.507 "name": "BaseBdev1", 00:21:33.507 "uuid": "1b7f264e-7653-42f0-ad80-e38f2f769e43", 00:21:33.507 "is_configured": true, 00:21:33.507 "data_offset": 0, 00:21:33.507 "data_size": 65536 00:21:33.507 }, 00:21:33.507 { 00:21:33.507 "name": null, 00:21:33.507 "uuid": "b0fd3401-f904-47d7-b3f3-dc5914a4218d", 00:21:33.507 "is_configured": false, 00:21:33.507 "data_offset": 0, 00:21:33.507 "data_size": 65536 00:21:33.507 }, 00:21:33.507 { 00:21:33.507 "name": "BaseBdev3", 00:21:33.507 "uuid": "414dd2f1-dfe5-4149-9f58-da194c605a7a", 00:21:33.507 "is_configured": true, 00:21:33.507 "data_offset": 0, 00:21:33.507 "data_size": 65536 00:21:33.507 }, 00:21:33.507 { 00:21:33.507 "name": "BaseBdev4", 00:21:33.507 "uuid": "1444490d-390b-412a-bebd-3d8741e94b91", 00:21:33.507 "is_configured": true, 00:21:33.507 "data_offset": 0, 00:21:33.507 "data_size": 65536 00:21:33.507 } 00:21:33.507 ] 00:21:33.507 }' 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.507 17:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:33.766 17:09:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.766 [2024-11-08 17:09:10.320746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.766 "name": "Existed_Raid", 00:21:33.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.766 "strip_size_kb": 64, 00:21:33.766 "state": "configuring", 00:21:33.766 "raid_level": "concat", 00:21:33.766 "superblock": false, 00:21:33.766 "num_base_bdevs": 4, 00:21:33.766 "num_base_bdevs_discovered": 2, 00:21:33.766 "num_base_bdevs_operational": 4, 00:21:33.766 "base_bdevs_list": [ 00:21:33.766 { 00:21:33.766 "name": "BaseBdev1", 00:21:33.766 "uuid": "1b7f264e-7653-42f0-ad80-e38f2f769e43", 00:21:33.766 "is_configured": true, 00:21:33.766 "data_offset": 0, 00:21:33.766 "data_size": 65536 00:21:33.766 }, 00:21:33.766 { 00:21:33.766 "name": null, 00:21:33.766 "uuid": "b0fd3401-f904-47d7-b3f3-dc5914a4218d", 00:21:33.766 "is_configured": false, 00:21:33.766 "data_offset": 0, 00:21:33.766 "data_size": 65536 00:21:33.766 }, 00:21:33.766 { 00:21:33.766 "name": null, 00:21:33.766 "uuid": "414dd2f1-dfe5-4149-9f58-da194c605a7a", 00:21:33.766 "is_configured": false, 00:21:33.766 "data_offset": 0, 00:21:33.766 "data_size": 65536 00:21:33.766 }, 00:21:33.766 { 00:21:33.766 "name": "BaseBdev4", 00:21:33.766 "uuid": "1444490d-390b-412a-bebd-3d8741e94b91", 00:21:33.766 "is_configured": true, 00:21:33.766 "data_offset": 0, 00:21:33.766 "data_size": 65536 00:21:33.766 } 00:21:33.766 ] 00:21:33.766 }' 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.766 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.024 [2024-11-08 17:09:10.688840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.024 "name": "Existed_Raid", 00:21:34.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.024 "strip_size_kb": 64, 00:21:34.024 "state": "configuring", 00:21:34.024 "raid_level": "concat", 00:21:34.024 "superblock": false, 00:21:34.024 "num_base_bdevs": 4, 00:21:34.024 "num_base_bdevs_discovered": 3, 00:21:34.024 "num_base_bdevs_operational": 4, 00:21:34.024 "base_bdevs_list": [ 00:21:34.024 { 00:21:34.024 "name": "BaseBdev1", 00:21:34.024 "uuid": "1b7f264e-7653-42f0-ad80-e38f2f769e43", 00:21:34.024 "is_configured": true, 00:21:34.024 "data_offset": 0, 00:21:34.024 "data_size": 65536 00:21:34.024 }, 00:21:34.024 { 00:21:34.024 "name": null, 00:21:34.024 "uuid": "b0fd3401-f904-47d7-b3f3-dc5914a4218d", 00:21:34.024 "is_configured": false, 00:21:34.024 "data_offset": 0, 00:21:34.024 "data_size": 65536 00:21:34.024 }, 00:21:34.024 { 00:21:34.024 "name": "BaseBdev3", 00:21:34.024 "uuid": "414dd2f1-dfe5-4149-9f58-da194c605a7a", 00:21:34.024 "is_configured": 
true, 00:21:34.024 "data_offset": 0, 00:21:34.024 "data_size": 65536 00:21:34.024 }, 00:21:34.024 { 00:21:34.024 "name": "BaseBdev4", 00:21:34.024 "uuid": "1444490d-390b-412a-bebd-3d8741e94b91", 00:21:34.024 "is_configured": true, 00:21:34.024 "data_offset": 0, 00:21:34.024 "data_size": 65536 00:21:34.024 } 00:21:34.024 ] 00:21:34.024 }' 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.024 17:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.590 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.590 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.590 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.590 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:34.590 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.590 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:34.590 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:34.590 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.590 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.590 [2024-11-08 17:09:11.060970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:34.590 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.591 "name": "Existed_Raid", 00:21:34.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.591 "strip_size_kb": 64, 00:21:34.591 "state": "configuring", 00:21:34.591 "raid_level": "concat", 00:21:34.591 "superblock": false, 00:21:34.591 "num_base_bdevs": 4, 00:21:34.591 "num_base_bdevs_discovered": 2, 00:21:34.591 "num_base_bdevs_operational": 4, 00:21:34.591 
"base_bdevs_list": [ 00:21:34.591 { 00:21:34.591 "name": null, 00:21:34.591 "uuid": "1b7f264e-7653-42f0-ad80-e38f2f769e43", 00:21:34.591 "is_configured": false, 00:21:34.591 "data_offset": 0, 00:21:34.591 "data_size": 65536 00:21:34.591 }, 00:21:34.591 { 00:21:34.591 "name": null, 00:21:34.591 "uuid": "b0fd3401-f904-47d7-b3f3-dc5914a4218d", 00:21:34.591 "is_configured": false, 00:21:34.591 "data_offset": 0, 00:21:34.591 "data_size": 65536 00:21:34.591 }, 00:21:34.591 { 00:21:34.591 "name": "BaseBdev3", 00:21:34.591 "uuid": "414dd2f1-dfe5-4149-9f58-da194c605a7a", 00:21:34.591 "is_configured": true, 00:21:34.591 "data_offset": 0, 00:21:34.591 "data_size": 65536 00:21:34.591 }, 00:21:34.591 { 00:21:34.591 "name": "BaseBdev4", 00:21:34.591 "uuid": "1444490d-390b-412a-bebd-3d8741e94b91", 00:21:34.591 "is_configured": true, 00:21:34.591 "data_offset": 0, 00:21:34.591 "data_size": 65536 00:21:34.591 } 00:21:34.591 ] 00:21:34.591 }' 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.591 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:34.849 17:09:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.849 [2024-11-08 17:09:11.500306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.849 17:09:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.849 "name": "Existed_Raid", 00:21:34.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.849 "strip_size_kb": 64, 00:21:34.849 "state": "configuring", 00:21:34.849 "raid_level": "concat", 00:21:34.849 "superblock": false, 00:21:34.849 "num_base_bdevs": 4, 00:21:34.849 "num_base_bdevs_discovered": 3, 00:21:34.849 "num_base_bdevs_operational": 4, 00:21:34.849 "base_bdevs_list": [ 00:21:34.849 { 00:21:34.849 "name": null, 00:21:34.849 "uuid": "1b7f264e-7653-42f0-ad80-e38f2f769e43", 00:21:34.849 "is_configured": false, 00:21:34.849 "data_offset": 0, 00:21:34.849 "data_size": 65536 00:21:34.849 }, 00:21:34.849 { 00:21:34.849 "name": "BaseBdev2", 00:21:34.849 "uuid": "b0fd3401-f904-47d7-b3f3-dc5914a4218d", 00:21:34.849 "is_configured": true, 00:21:34.849 "data_offset": 0, 00:21:34.849 "data_size": 65536 00:21:34.849 }, 00:21:34.849 { 00:21:34.849 "name": "BaseBdev3", 00:21:34.849 "uuid": "414dd2f1-dfe5-4149-9f58-da194c605a7a", 00:21:34.849 "is_configured": true, 00:21:34.849 "data_offset": 0, 00:21:34.849 "data_size": 65536 00:21:34.849 }, 00:21:34.849 { 00:21:34.849 "name": "BaseBdev4", 00:21:34.849 "uuid": "1444490d-390b-412a-bebd-3d8741e94b91", 00:21:34.849 "is_configured": true, 00:21:34.849 "data_offset": 0, 00:21:34.849 "data_size": 65536 00:21:34.849 } 00:21:34.849 ] 00:21:34.849 }' 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.849 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1b7f264e-7653-42f0-ad80-e38f2f769e43 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.416 [2024-11-08 17:09:11.917254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:35.416 [2024-11-08 17:09:11.917305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:35.416 [2024-11-08 17:09:11.917312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:35.416 [2024-11-08 17:09:11.917583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:35.416 [2024-11-08 17:09:11.917728] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:35.416 [2024-11-08 17:09:11.917739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:35.416 [2024-11-08 17:09:11.917986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.416 NewBaseBdev 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.416 [ 00:21:35.416 { 
00:21:35.416 "name": "NewBaseBdev", 00:21:35.416 "aliases": [ 00:21:35.416 "1b7f264e-7653-42f0-ad80-e38f2f769e43" 00:21:35.416 ], 00:21:35.416 "product_name": "Malloc disk", 00:21:35.416 "block_size": 512, 00:21:35.416 "num_blocks": 65536, 00:21:35.416 "uuid": "1b7f264e-7653-42f0-ad80-e38f2f769e43", 00:21:35.416 "assigned_rate_limits": { 00:21:35.416 "rw_ios_per_sec": 0, 00:21:35.416 "rw_mbytes_per_sec": 0, 00:21:35.416 "r_mbytes_per_sec": 0, 00:21:35.416 "w_mbytes_per_sec": 0 00:21:35.416 }, 00:21:35.416 "claimed": true, 00:21:35.416 "claim_type": "exclusive_write", 00:21:35.416 "zoned": false, 00:21:35.416 "supported_io_types": { 00:21:35.416 "read": true, 00:21:35.416 "write": true, 00:21:35.416 "unmap": true, 00:21:35.416 "flush": true, 00:21:35.416 "reset": true, 00:21:35.416 "nvme_admin": false, 00:21:35.416 "nvme_io": false, 00:21:35.416 "nvme_io_md": false, 00:21:35.416 "write_zeroes": true, 00:21:35.416 "zcopy": true, 00:21:35.416 "get_zone_info": false, 00:21:35.416 "zone_management": false, 00:21:35.416 "zone_append": false, 00:21:35.416 "compare": false, 00:21:35.416 "compare_and_write": false, 00:21:35.416 "abort": true, 00:21:35.416 "seek_hole": false, 00:21:35.416 "seek_data": false, 00:21:35.416 "copy": true, 00:21:35.416 "nvme_iov_md": false 00:21:35.416 }, 00:21:35.416 "memory_domains": [ 00:21:35.416 { 00:21:35.416 "dma_device_id": "system", 00:21:35.416 "dma_device_type": 1 00:21:35.416 }, 00:21:35.416 { 00:21:35.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.416 "dma_device_type": 2 00:21:35.416 } 00:21:35.416 ], 00:21:35.416 "driver_specific": {} 00:21:35.416 } 00:21:35.416 ] 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:35.416 
17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.416 "name": "Existed_Raid", 00:21:35.416 "uuid": "3bf80a81-fb18-4ac0-8332-098860d46018", 00:21:35.416 "strip_size_kb": 64, 00:21:35.416 "state": "online", 00:21:35.416 "raid_level": "concat", 00:21:35.416 "superblock": false, 00:21:35.416 "num_base_bdevs": 4, 00:21:35.416 "num_base_bdevs_discovered": 4, 00:21:35.416 
"num_base_bdevs_operational": 4, 00:21:35.416 "base_bdevs_list": [ 00:21:35.416 { 00:21:35.416 "name": "NewBaseBdev", 00:21:35.416 "uuid": "1b7f264e-7653-42f0-ad80-e38f2f769e43", 00:21:35.416 "is_configured": true, 00:21:35.416 "data_offset": 0, 00:21:35.416 "data_size": 65536 00:21:35.416 }, 00:21:35.416 { 00:21:35.416 "name": "BaseBdev2", 00:21:35.416 "uuid": "b0fd3401-f904-47d7-b3f3-dc5914a4218d", 00:21:35.416 "is_configured": true, 00:21:35.416 "data_offset": 0, 00:21:35.416 "data_size": 65536 00:21:35.416 }, 00:21:35.416 { 00:21:35.416 "name": "BaseBdev3", 00:21:35.416 "uuid": "414dd2f1-dfe5-4149-9f58-da194c605a7a", 00:21:35.416 "is_configured": true, 00:21:35.416 "data_offset": 0, 00:21:35.416 "data_size": 65536 00:21:35.416 }, 00:21:35.416 { 00:21:35.416 "name": "BaseBdev4", 00:21:35.416 "uuid": "1444490d-390b-412a-bebd-3d8741e94b91", 00:21:35.416 "is_configured": true, 00:21:35.416 "data_offset": 0, 00:21:35.416 "data_size": 65536 00:21:35.416 } 00:21:35.416 ] 00:21:35.416 }' 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.416 17:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.675 [2024-11-08 17:09:12.293843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.675 "name": "Existed_Raid", 00:21:35.675 "aliases": [ 00:21:35.675 "3bf80a81-fb18-4ac0-8332-098860d46018" 00:21:35.675 ], 00:21:35.675 "product_name": "Raid Volume", 00:21:35.675 "block_size": 512, 00:21:35.675 "num_blocks": 262144, 00:21:35.675 "uuid": "3bf80a81-fb18-4ac0-8332-098860d46018", 00:21:35.675 "assigned_rate_limits": { 00:21:35.675 "rw_ios_per_sec": 0, 00:21:35.675 "rw_mbytes_per_sec": 0, 00:21:35.675 "r_mbytes_per_sec": 0, 00:21:35.675 "w_mbytes_per_sec": 0 00:21:35.675 }, 00:21:35.675 "claimed": false, 00:21:35.675 "zoned": false, 00:21:35.675 "supported_io_types": { 00:21:35.675 "read": true, 00:21:35.675 "write": true, 00:21:35.675 "unmap": true, 00:21:35.675 "flush": true, 00:21:35.675 "reset": true, 00:21:35.675 "nvme_admin": false, 00:21:35.675 "nvme_io": false, 00:21:35.675 "nvme_io_md": false, 00:21:35.675 "write_zeroes": true, 00:21:35.675 "zcopy": false, 00:21:35.675 "get_zone_info": false, 00:21:35.675 "zone_management": false, 00:21:35.675 "zone_append": false, 00:21:35.675 "compare": false, 00:21:35.675 "compare_and_write": false, 00:21:35.675 "abort": false, 00:21:35.675 "seek_hole": false, 00:21:35.675 "seek_data": false, 00:21:35.675 "copy": false, 00:21:35.675 "nvme_iov_md": false 00:21:35.675 }, 00:21:35.675 "memory_domains": [ 00:21:35.675 { 00:21:35.675 "dma_device_id": "system", 
00:21:35.675 "dma_device_type": 1 00:21:35.675 }, 00:21:35.675 { 00:21:35.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.675 "dma_device_type": 2 00:21:35.675 }, 00:21:35.675 { 00:21:35.675 "dma_device_id": "system", 00:21:35.675 "dma_device_type": 1 00:21:35.675 }, 00:21:35.675 { 00:21:35.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.675 "dma_device_type": 2 00:21:35.675 }, 00:21:35.675 { 00:21:35.675 "dma_device_id": "system", 00:21:35.675 "dma_device_type": 1 00:21:35.675 }, 00:21:35.675 { 00:21:35.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.675 "dma_device_type": 2 00:21:35.675 }, 00:21:35.675 { 00:21:35.675 "dma_device_id": "system", 00:21:35.675 "dma_device_type": 1 00:21:35.675 }, 00:21:35.675 { 00:21:35.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.675 "dma_device_type": 2 00:21:35.675 } 00:21:35.675 ], 00:21:35.675 "driver_specific": { 00:21:35.675 "raid": { 00:21:35.675 "uuid": "3bf80a81-fb18-4ac0-8332-098860d46018", 00:21:35.675 "strip_size_kb": 64, 00:21:35.675 "state": "online", 00:21:35.675 "raid_level": "concat", 00:21:35.675 "superblock": false, 00:21:35.675 "num_base_bdevs": 4, 00:21:35.675 "num_base_bdevs_discovered": 4, 00:21:35.675 "num_base_bdevs_operational": 4, 00:21:35.675 "base_bdevs_list": [ 00:21:35.675 { 00:21:35.675 "name": "NewBaseBdev", 00:21:35.675 "uuid": "1b7f264e-7653-42f0-ad80-e38f2f769e43", 00:21:35.675 "is_configured": true, 00:21:35.675 "data_offset": 0, 00:21:35.675 "data_size": 65536 00:21:35.675 }, 00:21:35.675 { 00:21:35.675 "name": "BaseBdev2", 00:21:35.675 "uuid": "b0fd3401-f904-47d7-b3f3-dc5914a4218d", 00:21:35.675 "is_configured": true, 00:21:35.675 "data_offset": 0, 00:21:35.675 "data_size": 65536 00:21:35.675 }, 00:21:35.675 { 00:21:35.675 "name": "BaseBdev3", 00:21:35.675 "uuid": "414dd2f1-dfe5-4149-9f58-da194c605a7a", 00:21:35.675 "is_configured": true, 00:21:35.675 "data_offset": 0, 00:21:35.675 "data_size": 65536 00:21:35.675 }, 00:21:35.675 { 00:21:35.675 "name": "BaseBdev4", 
00:21:35.675 "uuid": "1444490d-390b-412a-bebd-3d8741e94b91", 00:21:35.675 "is_configured": true, 00:21:35.675 "data_offset": 0, 00:21:35.675 "data_size": 65536 00:21:35.675 } 00:21:35.675 ] 00:21:35.675 } 00:21:35.675 } 00:21:35.675 }' 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:35.675 BaseBdev2 00:21:35.675 BaseBdev3 00:21:35.675 BaseBdev4' 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:35.675 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.933 [2024-11-08 17:09:12.533480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:35.933 [2024-11-08 17:09:12.533522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.933 [2024-11-08 17:09:12.533606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.933 [2024-11-08 17:09:12.533700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.933 [2024-11-08 17:09:12.533711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69846 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 69846 
']' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69846 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69846 00:21:35.933 killing process with pid 69846 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69846' 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69846 00:21:35.933 [2024-11-08 17:09:12.566522] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:35.933 17:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69846 00:21:36.190 [2024-11-08 17:09:12.827313] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:37.123 00:21:37.123 real 0m8.894s 00:21:37.123 user 0m14.017s 00:21:37.123 sys 0m1.627s 00:21:37.123 ************************************ 00:21:37.123 END TEST raid_state_function_test 00:21:37.123 ************************************ 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.123 17:09:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:21:37.123 
17:09:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:37.123 17:09:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:37.123 17:09:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:37.123 ************************************ 00:21:37.123 START TEST raid_state_function_test_sb 00:21:37.123 ************************************ 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:37.123 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70490 00:21:37.124 Process raid pid: 70490 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 70490' 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70490 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70490 ']' 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:37.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:37.124 17:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.124 [2024-11-08 17:09:13.713308] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:21:37.124 [2024-11-08 17:09:13.713450] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.381 [2024-11-08 17:09:13.875166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.381 [2024-11-08 17:09:13.993529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.639 [2024-11-08 17:09:14.142871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.639 [2024-11-08 17:09:14.142926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.897 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:37.897 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:21:37.897 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:37.897 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.897 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.897 [2024-11-08 17:09:14.582854] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.897 [2024-11-08 17:09:14.582906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.897 [2024-11-08 17:09:14.582916] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.897 [2024-11-08 17:09:14.582926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.897 [2024-11-08 17:09:14.582933] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:21:37.897 [2024-11-08 17:09:14.582942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:37.897 [2024-11-08 17:09:14.582949] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:37.897 [2024-11-08 17:09:14.582957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:37.897 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.897 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.898 
17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.898 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.157 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.157 "name": "Existed_Raid", 00:21:38.157 "uuid": "520b0dae-f90e-41a1-a5bd-90473ea01c13", 00:21:38.157 "strip_size_kb": 64, 00:21:38.157 "state": "configuring", 00:21:38.157 "raid_level": "concat", 00:21:38.157 "superblock": true, 00:21:38.157 "num_base_bdevs": 4, 00:21:38.157 "num_base_bdevs_discovered": 0, 00:21:38.157 "num_base_bdevs_operational": 4, 00:21:38.157 "base_bdevs_list": [ 00:21:38.157 { 00:21:38.157 "name": "BaseBdev1", 00:21:38.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.157 "is_configured": false, 00:21:38.157 "data_offset": 0, 00:21:38.157 "data_size": 0 00:21:38.157 }, 00:21:38.157 { 00:21:38.157 "name": "BaseBdev2", 00:21:38.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.157 "is_configured": false, 00:21:38.157 "data_offset": 0, 00:21:38.157 "data_size": 0 00:21:38.157 }, 00:21:38.157 { 00:21:38.157 "name": "BaseBdev3", 00:21:38.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.157 "is_configured": false, 00:21:38.157 "data_offset": 0, 00:21:38.157 "data_size": 0 00:21:38.157 }, 00:21:38.157 { 00:21:38.157 "name": "BaseBdev4", 00:21:38.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.157 "is_configured": false, 00:21:38.157 "data_offset": 0, 00:21:38.157 "data_size": 0 00:21:38.157 } 00:21:38.157 ] 00:21:38.157 }' 00:21:38.157 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.157 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.415 17:09:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.415 [2024-11-08 17:09:14.942884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:38.415 [2024-11-08 17:09:14.942953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.415 [2024-11-08 17:09:14.950900] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:38.415 [2024-11-08 17:09:14.950950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:38.415 [2024-11-08 17:09:14.950961] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.415 [2024-11-08 17:09:14.950972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.415 [2024-11-08 17:09:14.950978] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:38.415 [2024-11-08 17:09:14.950988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:38.415 [2024-11-08 17:09:14.950995] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:21:38.415 [2024-11-08 17:09:14.951005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.415 [2024-11-08 17:09:14.986231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.415 BaseBdev1 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.415 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.416 17:09:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.416 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:38.416 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.416 17:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.416 [ 00:21:38.416 { 00:21:38.416 "name": "BaseBdev1", 00:21:38.416 "aliases": [ 00:21:38.416 "d3bc4984-c92c-4d42-8c7c-ee22eaac1f6e" 00:21:38.416 ], 00:21:38.416 "product_name": "Malloc disk", 00:21:38.416 "block_size": 512, 00:21:38.416 "num_blocks": 65536, 00:21:38.416 "uuid": "d3bc4984-c92c-4d42-8c7c-ee22eaac1f6e", 00:21:38.416 "assigned_rate_limits": { 00:21:38.416 "rw_ios_per_sec": 0, 00:21:38.416 "rw_mbytes_per_sec": 0, 00:21:38.416 "r_mbytes_per_sec": 0, 00:21:38.416 "w_mbytes_per_sec": 0 00:21:38.416 }, 00:21:38.416 "claimed": true, 00:21:38.416 "claim_type": "exclusive_write", 00:21:38.416 "zoned": false, 00:21:38.416 "supported_io_types": { 00:21:38.416 "read": true, 00:21:38.416 "write": true, 00:21:38.416 "unmap": true, 00:21:38.416 "flush": true, 00:21:38.416 "reset": true, 00:21:38.416 "nvme_admin": false, 00:21:38.416 "nvme_io": false, 00:21:38.416 "nvme_io_md": false, 00:21:38.416 "write_zeroes": true, 00:21:38.416 "zcopy": true, 00:21:38.416 "get_zone_info": false, 00:21:38.416 "zone_management": false, 00:21:38.416 "zone_append": false, 00:21:38.416 "compare": false, 00:21:38.416 "compare_and_write": false, 00:21:38.416 "abort": true, 00:21:38.416 "seek_hole": false, 00:21:38.416 "seek_data": false, 00:21:38.416 "copy": true, 00:21:38.416 "nvme_iov_md": false 00:21:38.416 }, 00:21:38.416 "memory_domains": [ 00:21:38.416 { 00:21:38.416 "dma_device_id": "system", 00:21:38.416 "dma_device_type": 1 00:21:38.416 }, 00:21:38.416 { 00:21:38.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.416 "dma_device_type": 2 00:21:38.416 } 
00:21:38.416 ], 00:21:38.416 "driver_specific": {} 00:21:38.416 } 00:21:38.416 ] 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.416 17:09:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.416 "name": "Existed_Raid", 00:21:38.416 "uuid": "2218868d-3f6e-4ef2-bb65-08a5e1115d44", 00:21:38.416 "strip_size_kb": 64, 00:21:38.416 "state": "configuring", 00:21:38.416 "raid_level": "concat", 00:21:38.416 "superblock": true, 00:21:38.416 "num_base_bdevs": 4, 00:21:38.416 "num_base_bdevs_discovered": 1, 00:21:38.416 "num_base_bdevs_operational": 4, 00:21:38.416 "base_bdevs_list": [ 00:21:38.416 { 00:21:38.416 "name": "BaseBdev1", 00:21:38.416 "uuid": "d3bc4984-c92c-4d42-8c7c-ee22eaac1f6e", 00:21:38.416 "is_configured": true, 00:21:38.416 "data_offset": 2048, 00:21:38.416 "data_size": 63488 00:21:38.416 }, 00:21:38.416 { 00:21:38.416 "name": "BaseBdev2", 00:21:38.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.416 "is_configured": false, 00:21:38.416 "data_offset": 0, 00:21:38.416 "data_size": 0 00:21:38.416 }, 00:21:38.416 { 00:21:38.416 "name": "BaseBdev3", 00:21:38.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.416 "is_configured": false, 00:21:38.416 "data_offset": 0, 00:21:38.416 "data_size": 0 00:21:38.416 }, 00:21:38.416 { 00:21:38.416 "name": "BaseBdev4", 00:21:38.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.416 "is_configured": false, 00:21:38.416 "data_offset": 0, 00:21:38.416 "data_size": 0 00:21:38.416 } 00:21:38.416 ] 00:21:38.416 }' 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.416 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.700 17:09:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.700 [2024-11-08 17:09:15.362395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:38.700 [2024-11-08 17:09:15.362458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.700 [2024-11-08 17:09:15.370453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.700 [2024-11-08 17:09:15.372429] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.700 [2024-11-08 17:09:15.372477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.700 [2024-11-08 17:09:15.372488] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:38.700 [2024-11-08 17:09:15.372501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:38.700 [2024-11-08 17:09:15.372508] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:38.700 [2024-11-08 17:09:15.372517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.700 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.958 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.958 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:38.958 "name": "Existed_Raid", 00:21:38.958 "uuid": "9387b458-231d-4ca2-aad9-20e61e0e2ce3", 00:21:38.958 "strip_size_kb": 64, 00:21:38.958 "state": "configuring", 00:21:38.958 "raid_level": "concat", 00:21:38.958 "superblock": true, 00:21:38.958 "num_base_bdevs": 4, 00:21:38.958 "num_base_bdevs_discovered": 1, 00:21:38.958 "num_base_bdevs_operational": 4, 00:21:38.958 "base_bdevs_list": [ 00:21:38.958 { 00:21:38.958 "name": "BaseBdev1", 00:21:38.958 "uuid": "d3bc4984-c92c-4d42-8c7c-ee22eaac1f6e", 00:21:38.958 "is_configured": true, 00:21:38.958 "data_offset": 2048, 00:21:38.958 "data_size": 63488 00:21:38.958 }, 00:21:38.958 { 00:21:38.958 "name": "BaseBdev2", 00:21:38.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.958 "is_configured": false, 00:21:38.958 "data_offset": 0, 00:21:38.958 "data_size": 0 00:21:38.958 }, 00:21:38.958 { 00:21:38.958 "name": "BaseBdev3", 00:21:38.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.958 "is_configured": false, 00:21:38.958 "data_offset": 0, 00:21:38.958 "data_size": 0 00:21:38.958 }, 00:21:38.958 { 00:21:38.958 "name": "BaseBdev4", 00:21:38.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.958 "is_configured": false, 00:21:38.958 "data_offset": 0, 00:21:38.958 "data_size": 0 00:21:38.958 } 00:21:38.958 ] 00:21:38.958 }' 00:21:38.958 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.958 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.216 [2024-11-08 17:09:15.731694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:21:39.216 BaseBdev2 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.216 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.216 [ 00:21:39.216 { 00:21:39.216 "name": "BaseBdev2", 00:21:39.216 "aliases": [ 00:21:39.216 "25a70b77-34db-4c0d-98d7-c1b5c226c1ed" 00:21:39.216 ], 00:21:39.216 "product_name": "Malloc disk", 00:21:39.216 "block_size": 512, 00:21:39.216 "num_blocks": 65536, 00:21:39.216 "uuid": "25a70b77-34db-4c0d-98d7-c1b5c226c1ed", 
00:21:39.216 "assigned_rate_limits": { 00:21:39.216 "rw_ios_per_sec": 0, 00:21:39.216 "rw_mbytes_per_sec": 0, 00:21:39.216 "r_mbytes_per_sec": 0, 00:21:39.216 "w_mbytes_per_sec": 0 00:21:39.216 }, 00:21:39.216 "claimed": true, 00:21:39.216 "claim_type": "exclusive_write", 00:21:39.216 "zoned": false, 00:21:39.216 "supported_io_types": { 00:21:39.216 "read": true, 00:21:39.216 "write": true, 00:21:39.216 "unmap": true, 00:21:39.216 "flush": true, 00:21:39.216 "reset": true, 00:21:39.216 "nvme_admin": false, 00:21:39.216 "nvme_io": false, 00:21:39.216 "nvme_io_md": false, 00:21:39.216 "write_zeroes": true, 00:21:39.217 "zcopy": true, 00:21:39.217 "get_zone_info": false, 00:21:39.217 "zone_management": false, 00:21:39.217 "zone_append": false, 00:21:39.217 "compare": false, 00:21:39.217 "compare_and_write": false, 00:21:39.217 "abort": true, 00:21:39.217 "seek_hole": false, 00:21:39.217 "seek_data": false, 00:21:39.217 "copy": true, 00:21:39.217 "nvme_iov_md": false 00:21:39.217 }, 00:21:39.217 "memory_domains": [ 00:21:39.217 { 00:21:39.217 "dma_device_id": "system", 00:21:39.217 "dma_device_type": 1 00:21:39.217 }, 00:21:39.217 { 00:21:39.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.217 "dma_device_type": 2 00:21:39.217 } 00:21:39.217 ], 00:21:39.217 "driver_specific": {} 00:21:39.217 } 00:21:39.217 ] 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.217 "name": "Existed_Raid", 00:21:39.217 "uuid": "9387b458-231d-4ca2-aad9-20e61e0e2ce3", 00:21:39.217 "strip_size_kb": 64, 00:21:39.217 "state": "configuring", 00:21:39.217 "raid_level": "concat", 00:21:39.217 "superblock": true, 00:21:39.217 "num_base_bdevs": 4, 00:21:39.217 "num_base_bdevs_discovered": 2, 00:21:39.217 
"num_base_bdevs_operational": 4, 00:21:39.217 "base_bdevs_list": [ 00:21:39.217 { 00:21:39.217 "name": "BaseBdev1", 00:21:39.217 "uuid": "d3bc4984-c92c-4d42-8c7c-ee22eaac1f6e", 00:21:39.217 "is_configured": true, 00:21:39.217 "data_offset": 2048, 00:21:39.217 "data_size": 63488 00:21:39.217 }, 00:21:39.217 { 00:21:39.217 "name": "BaseBdev2", 00:21:39.217 "uuid": "25a70b77-34db-4c0d-98d7-c1b5c226c1ed", 00:21:39.217 "is_configured": true, 00:21:39.217 "data_offset": 2048, 00:21:39.217 "data_size": 63488 00:21:39.217 }, 00:21:39.217 { 00:21:39.217 "name": "BaseBdev3", 00:21:39.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.217 "is_configured": false, 00:21:39.217 "data_offset": 0, 00:21:39.217 "data_size": 0 00:21:39.217 }, 00:21:39.217 { 00:21:39.217 "name": "BaseBdev4", 00:21:39.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.217 "is_configured": false, 00:21:39.217 "data_offset": 0, 00:21:39.217 "data_size": 0 00:21:39.217 } 00:21:39.217 ] 00:21:39.217 }' 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.217 17:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.475 [2024-11-08 17:09:16.120349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:39.475 BaseBdev3 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.475 [ 00:21:39.475 { 00:21:39.475 "name": "BaseBdev3", 00:21:39.475 "aliases": [ 00:21:39.475 "99058554-56bc-49b3-905e-f8cb71636495" 00:21:39.475 ], 00:21:39.475 "product_name": "Malloc disk", 00:21:39.475 "block_size": 512, 00:21:39.475 "num_blocks": 65536, 00:21:39.475 "uuid": "99058554-56bc-49b3-905e-f8cb71636495", 00:21:39.475 "assigned_rate_limits": { 00:21:39.475 "rw_ios_per_sec": 0, 00:21:39.475 "rw_mbytes_per_sec": 0, 00:21:39.475 "r_mbytes_per_sec": 0, 00:21:39.475 "w_mbytes_per_sec": 0 00:21:39.475 }, 00:21:39.475 "claimed": true, 00:21:39.475 "claim_type": "exclusive_write", 00:21:39.475 "zoned": false, 00:21:39.475 "supported_io_types": { 
00:21:39.475 "read": true, 00:21:39.475 "write": true, 00:21:39.475 "unmap": true, 00:21:39.475 "flush": true, 00:21:39.475 "reset": true, 00:21:39.475 "nvme_admin": false, 00:21:39.475 "nvme_io": false, 00:21:39.475 "nvme_io_md": false, 00:21:39.475 "write_zeroes": true, 00:21:39.475 "zcopy": true, 00:21:39.475 "get_zone_info": false, 00:21:39.475 "zone_management": false, 00:21:39.475 "zone_append": false, 00:21:39.475 "compare": false, 00:21:39.475 "compare_and_write": false, 00:21:39.475 "abort": true, 00:21:39.475 "seek_hole": false, 00:21:39.475 "seek_data": false, 00:21:39.475 "copy": true, 00:21:39.475 "nvme_iov_md": false 00:21:39.475 }, 00:21:39.475 "memory_domains": [ 00:21:39.475 { 00:21:39.475 "dma_device_id": "system", 00:21:39.475 "dma_device_type": 1 00:21:39.475 }, 00:21:39.475 { 00:21:39.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.475 "dma_device_type": 2 00:21:39.475 } 00:21:39.475 ], 00:21:39.475 "driver_specific": {} 00:21:39.475 } 00:21:39.475 ] 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.475 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.852 "name": "Existed_Raid", 00:21:39.852 "uuid": "9387b458-231d-4ca2-aad9-20e61e0e2ce3", 00:21:39.852 "strip_size_kb": 64, 00:21:39.852 "state": "configuring", 00:21:39.852 "raid_level": "concat", 00:21:39.852 "superblock": true, 00:21:39.852 "num_base_bdevs": 4, 00:21:39.852 "num_base_bdevs_discovered": 3, 00:21:39.852 "num_base_bdevs_operational": 4, 00:21:39.852 "base_bdevs_list": [ 00:21:39.852 { 00:21:39.852 "name": "BaseBdev1", 00:21:39.852 "uuid": "d3bc4984-c92c-4d42-8c7c-ee22eaac1f6e", 00:21:39.852 "is_configured": true, 00:21:39.852 "data_offset": 2048, 00:21:39.852 "data_size": 63488 00:21:39.852 }, 00:21:39.852 { 00:21:39.852 "name": "BaseBdev2", 00:21:39.852 
"uuid": "25a70b77-34db-4c0d-98d7-c1b5c226c1ed", 00:21:39.852 "is_configured": true, 00:21:39.852 "data_offset": 2048, 00:21:39.852 "data_size": 63488 00:21:39.852 }, 00:21:39.852 { 00:21:39.852 "name": "BaseBdev3", 00:21:39.852 "uuid": "99058554-56bc-49b3-905e-f8cb71636495", 00:21:39.852 "is_configured": true, 00:21:39.852 "data_offset": 2048, 00:21:39.852 "data_size": 63488 00:21:39.852 }, 00:21:39.852 { 00:21:39.852 "name": "BaseBdev4", 00:21:39.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.852 "is_configured": false, 00:21:39.852 "data_offset": 0, 00:21:39.852 "data_size": 0 00:21:39.852 } 00:21:39.852 ] 00:21:39.852 }' 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.852 [2024-11-08 17:09:16.505401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:39.852 [2024-11-08 17:09:16.505675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:39.852 [2024-11-08 17:09:16.505690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:39.852 BaseBdev4 00:21:39.852 [2024-11-08 17:09:16.505996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:39.852 [2024-11-08 17:09:16.506138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:39.852 [2024-11-08 17:09:16.506151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:21:39.852 [2024-11-08 17:09:16.506281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.852 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.852 [ 00:21:39.852 { 00:21:39.852 "name": "BaseBdev4", 00:21:39.852 "aliases": [ 00:21:39.852 "30df6f6d-f687-4dc3-af69-eaa3435dfae2" 00:21:39.852 ], 00:21:39.853 "product_name": "Malloc disk", 00:21:39.853 "block_size": 512, 00:21:39.853 
"num_blocks": 65536, 00:21:39.853 "uuid": "30df6f6d-f687-4dc3-af69-eaa3435dfae2", 00:21:39.853 "assigned_rate_limits": { 00:21:39.853 "rw_ios_per_sec": 0, 00:21:39.853 "rw_mbytes_per_sec": 0, 00:21:39.853 "r_mbytes_per_sec": 0, 00:21:39.853 "w_mbytes_per_sec": 0 00:21:39.853 }, 00:21:39.853 "claimed": true, 00:21:39.853 "claim_type": "exclusive_write", 00:21:39.853 "zoned": false, 00:21:39.853 "supported_io_types": { 00:21:39.853 "read": true, 00:21:39.853 "write": true, 00:21:39.853 "unmap": true, 00:21:39.853 "flush": true, 00:21:39.853 "reset": true, 00:21:39.853 "nvme_admin": false, 00:21:39.853 "nvme_io": false, 00:21:39.853 "nvme_io_md": false, 00:21:39.853 "write_zeroes": true, 00:21:39.853 "zcopy": true, 00:21:39.853 "get_zone_info": false, 00:21:39.853 "zone_management": false, 00:21:39.853 "zone_append": false, 00:21:39.853 "compare": false, 00:21:39.853 "compare_and_write": false, 00:21:39.853 "abort": true, 00:21:39.853 "seek_hole": false, 00:21:39.853 "seek_data": false, 00:21:39.853 "copy": true, 00:21:39.853 "nvme_iov_md": false 00:21:39.853 }, 00:21:39.853 "memory_domains": [ 00:21:39.853 { 00:21:39.853 "dma_device_id": "system", 00:21:39.853 "dma_device_type": 1 00:21:39.853 }, 00:21:39.853 { 00:21:39.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.853 "dma_device_type": 2 00:21:39.853 } 00:21:39.853 ], 00:21:39.853 "driver_specific": {} 00:21:39.853 } 00:21:39.853 ] 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.853 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.111 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.111 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.111 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.111 "name": "Existed_Raid", 00:21:40.111 "uuid": "9387b458-231d-4ca2-aad9-20e61e0e2ce3", 00:21:40.111 "strip_size_kb": 64, 00:21:40.111 "state": "online", 00:21:40.111 "raid_level": "concat", 00:21:40.111 "superblock": true, 00:21:40.111 "num_base_bdevs": 4, 
00:21:40.111 "num_base_bdevs_discovered": 4, 00:21:40.111 "num_base_bdevs_operational": 4, 00:21:40.111 "base_bdevs_list": [ 00:21:40.111 { 00:21:40.111 "name": "BaseBdev1", 00:21:40.111 "uuid": "d3bc4984-c92c-4d42-8c7c-ee22eaac1f6e", 00:21:40.111 "is_configured": true, 00:21:40.111 "data_offset": 2048, 00:21:40.111 "data_size": 63488 00:21:40.111 }, 00:21:40.111 { 00:21:40.111 "name": "BaseBdev2", 00:21:40.111 "uuid": "25a70b77-34db-4c0d-98d7-c1b5c226c1ed", 00:21:40.111 "is_configured": true, 00:21:40.111 "data_offset": 2048, 00:21:40.111 "data_size": 63488 00:21:40.111 }, 00:21:40.111 { 00:21:40.111 "name": "BaseBdev3", 00:21:40.111 "uuid": "99058554-56bc-49b3-905e-f8cb71636495", 00:21:40.111 "is_configured": true, 00:21:40.111 "data_offset": 2048, 00:21:40.111 "data_size": 63488 00:21:40.111 }, 00:21:40.111 { 00:21:40.111 "name": "BaseBdev4", 00:21:40.111 "uuid": "30df6f6d-f687-4dc3-af69-eaa3435dfae2", 00:21:40.111 "is_configured": true, 00:21:40.111 "data_offset": 2048, 00:21:40.111 "data_size": 63488 00:21:40.111 } 00:21:40.111 ] 00:21:40.111 }' 00:21:40.111 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.111 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:40.370 
17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.370 [2024-11-08 17:09:16.837951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:40.370 "name": "Existed_Raid", 00:21:40.370 "aliases": [ 00:21:40.370 "9387b458-231d-4ca2-aad9-20e61e0e2ce3" 00:21:40.370 ], 00:21:40.370 "product_name": "Raid Volume", 00:21:40.370 "block_size": 512, 00:21:40.370 "num_blocks": 253952, 00:21:40.370 "uuid": "9387b458-231d-4ca2-aad9-20e61e0e2ce3", 00:21:40.370 "assigned_rate_limits": { 00:21:40.370 "rw_ios_per_sec": 0, 00:21:40.370 "rw_mbytes_per_sec": 0, 00:21:40.370 "r_mbytes_per_sec": 0, 00:21:40.370 "w_mbytes_per_sec": 0 00:21:40.370 }, 00:21:40.370 "claimed": false, 00:21:40.370 "zoned": false, 00:21:40.370 "supported_io_types": { 00:21:40.370 "read": true, 00:21:40.370 "write": true, 00:21:40.370 "unmap": true, 00:21:40.370 "flush": true, 00:21:40.370 "reset": true, 00:21:40.370 "nvme_admin": false, 00:21:40.370 "nvme_io": false, 00:21:40.370 "nvme_io_md": false, 00:21:40.370 "write_zeroes": true, 00:21:40.370 "zcopy": false, 00:21:40.370 "get_zone_info": false, 00:21:40.370 "zone_management": false, 00:21:40.370 "zone_append": false, 00:21:40.370 "compare": false, 00:21:40.370 "compare_and_write": false, 00:21:40.370 "abort": false, 00:21:40.370 "seek_hole": false, 00:21:40.370 "seek_data": false, 00:21:40.370 "copy": false, 00:21:40.370 
"nvme_iov_md": false 00:21:40.370 }, 00:21:40.370 "memory_domains": [ 00:21:40.370 { 00:21:40.370 "dma_device_id": "system", 00:21:40.370 "dma_device_type": 1 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.370 "dma_device_type": 2 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "dma_device_id": "system", 00:21:40.370 "dma_device_type": 1 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.370 "dma_device_type": 2 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "dma_device_id": "system", 00:21:40.370 "dma_device_type": 1 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.370 "dma_device_type": 2 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "dma_device_id": "system", 00:21:40.370 "dma_device_type": 1 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.370 "dma_device_type": 2 00:21:40.370 } 00:21:40.370 ], 00:21:40.370 "driver_specific": { 00:21:40.370 "raid": { 00:21:40.370 "uuid": "9387b458-231d-4ca2-aad9-20e61e0e2ce3", 00:21:40.370 "strip_size_kb": 64, 00:21:40.370 "state": "online", 00:21:40.370 "raid_level": "concat", 00:21:40.370 "superblock": true, 00:21:40.370 "num_base_bdevs": 4, 00:21:40.370 "num_base_bdevs_discovered": 4, 00:21:40.370 "num_base_bdevs_operational": 4, 00:21:40.370 "base_bdevs_list": [ 00:21:40.370 { 00:21:40.370 "name": "BaseBdev1", 00:21:40.370 "uuid": "d3bc4984-c92c-4d42-8c7c-ee22eaac1f6e", 00:21:40.370 "is_configured": true, 00:21:40.370 "data_offset": 2048, 00:21:40.370 "data_size": 63488 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "name": "BaseBdev2", 00:21:40.370 "uuid": "25a70b77-34db-4c0d-98d7-c1b5c226c1ed", 00:21:40.370 "is_configured": true, 00:21:40.370 "data_offset": 2048, 00:21:40.370 "data_size": 63488 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "name": "BaseBdev3", 00:21:40.370 "uuid": "99058554-56bc-49b3-905e-f8cb71636495", 00:21:40.370 "is_configured": true, 
00:21:40.370 "data_offset": 2048, 00:21:40.370 "data_size": 63488 00:21:40.370 }, 00:21:40.370 { 00:21:40.370 "name": "BaseBdev4", 00:21:40.370 "uuid": "30df6f6d-f687-4dc3-af69-eaa3435dfae2", 00:21:40.370 "is_configured": true, 00:21:40.370 "data_offset": 2048, 00:21:40.370 "data_size": 63488 00:21:40.370 } 00:21:40.370 ] 00:21:40.370 } 00:21:40.370 } 00:21:40.370 }' 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:40.370 BaseBdev2 00:21:40.370 BaseBdev3 00:21:40.370 BaseBdev4' 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.370 17:09:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.370 17:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.370 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:40.370 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.370 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.370 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.370 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.370 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.370 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.370 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:21:40.370 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:40.371 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.371 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.371 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.371 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.371 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.371 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.371 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:40.371 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.371 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.371 [2024-11-08 17:09:17.069666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:40.371 [2024-11-08 17:09:17.069699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.371 [2024-11-08 17:09:17.069769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.628 "name": "Existed_Raid", 00:21:40.628 "uuid": "9387b458-231d-4ca2-aad9-20e61e0e2ce3", 00:21:40.628 "strip_size_kb": 64, 00:21:40.628 "state": "offline", 00:21:40.628 "raid_level": "concat", 00:21:40.628 "superblock": true, 00:21:40.628 "num_base_bdevs": 4, 00:21:40.628 "num_base_bdevs_discovered": 3, 00:21:40.628 "num_base_bdevs_operational": 3, 00:21:40.628 "base_bdevs_list": [ 00:21:40.628 { 00:21:40.628 "name": null, 00:21:40.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.628 "is_configured": false, 00:21:40.628 "data_offset": 0, 00:21:40.628 "data_size": 63488 00:21:40.628 }, 00:21:40.628 { 00:21:40.628 "name": "BaseBdev2", 00:21:40.628 "uuid": "25a70b77-34db-4c0d-98d7-c1b5c226c1ed", 00:21:40.628 "is_configured": true, 00:21:40.628 "data_offset": 2048, 00:21:40.628 "data_size": 63488 00:21:40.628 }, 00:21:40.628 { 00:21:40.628 "name": "BaseBdev3", 00:21:40.628 "uuid": "99058554-56bc-49b3-905e-f8cb71636495", 00:21:40.628 "is_configured": true, 00:21:40.628 "data_offset": 2048, 00:21:40.628 "data_size": 63488 00:21:40.628 }, 00:21:40.628 { 00:21:40.628 "name": "BaseBdev4", 00:21:40.628 "uuid": "30df6f6d-f687-4dc3-af69-eaa3435dfae2", 00:21:40.628 "is_configured": true, 00:21:40.628 "data_offset": 2048, 00:21:40.628 "data_size": 63488 00:21:40.628 } 00:21:40.628 ] 00:21:40.628 }' 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.628 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.886 
17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.886 [2024-11-08 17:09:17.503699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.886 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:40.887 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.887 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.145 [2024-11-08 17:09:17.605513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:41.145 17:09:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.145 [2024-11-08 17:09:17.710882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:41.145 [2024-11-08 17:09:17.710934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.145 BaseBdev2 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.145 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.404 [ 00:21:41.404 { 00:21:41.404 "name": "BaseBdev2", 00:21:41.404 "aliases": [ 00:21:41.404 
"743515c7-bc55-408c-96c5-75939990c6cd" 00:21:41.404 ], 00:21:41.404 "product_name": "Malloc disk", 00:21:41.404 "block_size": 512, 00:21:41.404 "num_blocks": 65536, 00:21:41.404 "uuid": "743515c7-bc55-408c-96c5-75939990c6cd", 00:21:41.404 "assigned_rate_limits": { 00:21:41.404 "rw_ios_per_sec": 0, 00:21:41.404 "rw_mbytes_per_sec": 0, 00:21:41.404 "r_mbytes_per_sec": 0, 00:21:41.404 "w_mbytes_per_sec": 0 00:21:41.404 }, 00:21:41.404 "claimed": false, 00:21:41.404 "zoned": false, 00:21:41.404 "supported_io_types": { 00:21:41.404 "read": true, 00:21:41.404 "write": true, 00:21:41.404 "unmap": true, 00:21:41.404 "flush": true, 00:21:41.404 "reset": true, 00:21:41.404 "nvme_admin": false, 00:21:41.404 "nvme_io": false, 00:21:41.404 "nvme_io_md": false, 00:21:41.404 "write_zeroes": true, 00:21:41.404 "zcopy": true, 00:21:41.404 "get_zone_info": false, 00:21:41.404 "zone_management": false, 00:21:41.404 "zone_append": false, 00:21:41.404 "compare": false, 00:21:41.404 "compare_and_write": false, 00:21:41.404 "abort": true, 00:21:41.404 "seek_hole": false, 00:21:41.404 "seek_data": false, 00:21:41.404 "copy": true, 00:21:41.404 "nvme_iov_md": false 00:21:41.404 }, 00:21:41.404 "memory_domains": [ 00:21:41.404 { 00:21:41.404 "dma_device_id": "system", 00:21:41.404 "dma_device_type": 1 00:21:41.404 }, 00:21:41.404 { 00:21:41.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.404 "dma_device_type": 2 00:21:41.404 } 00:21:41.404 ], 00:21:41.404 "driver_specific": {} 00:21:41.404 } 00:21:41.404 ] 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:41.404 17:09:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.404 BaseBdev3 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.404 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.404 [ 00:21:41.404 { 
00:21:41.404 "name": "BaseBdev3", 00:21:41.404 "aliases": [ 00:21:41.405 "872a5956-55b3-4eb9-905b-84ced858e734" 00:21:41.405 ], 00:21:41.405 "product_name": "Malloc disk", 00:21:41.405 "block_size": 512, 00:21:41.405 "num_blocks": 65536, 00:21:41.405 "uuid": "872a5956-55b3-4eb9-905b-84ced858e734", 00:21:41.405 "assigned_rate_limits": { 00:21:41.405 "rw_ios_per_sec": 0, 00:21:41.405 "rw_mbytes_per_sec": 0, 00:21:41.405 "r_mbytes_per_sec": 0, 00:21:41.405 "w_mbytes_per_sec": 0 00:21:41.405 }, 00:21:41.405 "claimed": false, 00:21:41.405 "zoned": false, 00:21:41.405 "supported_io_types": { 00:21:41.405 "read": true, 00:21:41.405 "write": true, 00:21:41.405 "unmap": true, 00:21:41.405 "flush": true, 00:21:41.405 "reset": true, 00:21:41.405 "nvme_admin": false, 00:21:41.405 "nvme_io": false, 00:21:41.405 "nvme_io_md": false, 00:21:41.405 "write_zeroes": true, 00:21:41.405 "zcopy": true, 00:21:41.405 "get_zone_info": false, 00:21:41.405 "zone_management": false, 00:21:41.405 "zone_append": false, 00:21:41.405 "compare": false, 00:21:41.405 "compare_and_write": false, 00:21:41.405 "abort": true, 00:21:41.405 "seek_hole": false, 00:21:41.405 "seek_data": false, 00:21:41.405 "copy": true, 00:21:41.405 "nvme_iov_md": false 00:21:41.405 }, 00:21:41.405 "memory_domains": [ 00:21:41.405 { 00:21:41.405 "dma_device_id": "system", 00:21:41.405 "dma_device_type": 1 00:21:41.405 }, 00:21:41.405 { 00:21:41.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.405 "dma_device_type": 2 00:21:41.405 } 00:21:41.405 ], 00:21:41.405 "driver_specific": {} 00:21:41.405 } 00:21:41.405 ] 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.405 BaseBdev4 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:21:41.405 [ 00:21:41.405 { 00:21:41.405 "name": "BaseBdev4", 00:21:41.405 "aliases": [ 00:21:41.405 "ee5ac0fa-21b5-4132-9440-4d653b59e797" 00:21:41.405 ], 00:21:41.405 "product_name": "Malloc disk", 00:21:41.405 "block_size": 512, 00:21:41.405 "num_blocks": 65536, 00:21:41.405 "uuid": "ee5ac0fa-21b5-4132-9440-4d653b59e797", 00:21:41.405 "assigned_rate_limits": { 00:21:41.405 "rw_ios_per_sec": 0, 00:21:41.405 "rw_mbytes_per_sec": 0, 00:21:41.405 "r_mbytes_per_sec": 0, 00:21:41.405 "w_mbytes_per_sec": 0 00:21:41.405 }, 00:21:41.405 "claimed": false, 00:21:41.405 "zoned": false, 00:21:41.405 "supported_io_types": { 00:21:41.405 "read": true, 00:21:41.405 "write": true, 00:21:41.405 "unmap": true, 00:21:41.405 "flush": true, 00:21:41.405 "reset": true, 00:21:41.405 "nvme_admin": false, 00:21:41.405 "nvme_io": false, 00:21:41.405 "nvme_io_md": false, 00:21:41.405 "write_zeroes": true, 00:21:41.405 "zcopy": true, 00:21:41.405 "get_zone_info": false, 00:21:41.405 "zone_management": false, 00:21:41.405 "zone_append": false, 00:21:41.405 "compare": false, 00:21:41.405 "compare_and_write": false, 00:21:41.405 "abort": true, 00:21:41.405 "seek_hole": false, 00:21:41.405 "seek_data": false, 00:21:41.405 "copy": true, 00:21:41.405 "nvme_iov_md": false 00:21:41.405 }, 00:21:41.405 "memory_domains": [ 00:21:41.405 { 00:21:41.405 "dma_device_id": "system", 00:21:41.405 "dma_device_type": 1 00:21:41.405 }, 00:21:41.405 { 00:21:41.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.405 "dma_device_type": 2 00:21:41.405 } 00:21:41.405 ], 00:21:41.405 "driver_specific": {} 00:21:41.405 } 00:21:41.405 ] 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:41.405 17:09:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.405 17:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.405 [2024-11-08 17:09:18.004177] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:41.405 [2024-11-08 17:09:18.004336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:41.405 [2024-11-08 17:09:18.004414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:41.405 [2024-11-08 17:09:18.006469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:41.405 [2024-11-08 17:09:18.006608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.405 "name": "Existed_Raid", 00:21:41.405 "uuid": "5796f2ba-be4f-4673-9bc1-ab258bae48c4", 00:21:41.405 "strip_size_kb": 64, 00:21:41.405 "state": "configuring", 00:21:41.405 "raid_level": "concat", 00:21:41.405 "superblock": true, 00:21:41.405 "num_base_bdevs": 4, 00:21:41.405 "num_base_bdevs_discovered": 3, 00:21:41.405 "num_base_bdevs_operational": 4, 00:21:41.405 "base_bdevs_list": [ 00:21:41.405 { 00:21:41.405 "name": "BaseBdev1", 00:21:41.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.405 "is_configured": false, 00:21:41.405 "data_offset": 0, 00:21:41.405 "data_size": 0 00:21:41.405 }, 00:21:41.405 { 00:21:41.405 "name": "BaseBdev2", 00:21:41.405 "uuid": "743515c7-bc55-408c-96c5-75939990c6cd", 00:21:41.405 "is_configured": true, 00:21:41.405 "data_offset": 2048, 00:21:41.405 "data_size": 63488 
00:21:41.405 }, 00:21:41.405 { 00:21:41.405 "name": "BaseBdev3", 00:21:41.405 "uuid": "872a5956-55b3-4eb9-905b-84ced858e734", 00:21:41.405 "is_configured": true, 00:21:41.405 "data_offset": 2048, 00:21:41.405 "data_size": 63488 00:21:41.405 }, 00:21:41.405 { 00:21:41.405 "name": "BaseBdev4", 00:21:41.405 "uuid": "ee5ac0fa-21b5-4132-9440-4d653b59e797", 00:21:41.405 "is_configured": true, 00:21:41.405 "data_offset": 2048, 00:21:41.405 "data_size": 63488 00:21:41.405 } 00:21:41.405 ] 00:21:41.405 }' 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.405 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.664 [2024-11-08 17:09:18.324240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.664 "name": "Existed_Raid", 00:21:41.664 "uuid": "5796f2ba-be4f-4673-9bc1-ab258bae48c4", 00:21:41.664 "strip_size_kb": 64, 00:21:41.664 "state": "configuring", 00:21:41.664 "raid_level": "concat", 00:21:41.664 "superblock": true, 00:21:41.664 "num_base_bdevs": 4, 00:21:41.664 "num_base_bdevs_discovered": 2, 00:21:41.664 "num_base_bdevs_operational": 4, 00:21:41.664 "base_bdevs_list": [ 00:21:41.664 { 00:21:41.664 "name": "BaseBdev1", 00:21:41.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.664 "is_configured": false, 00:21:41.664 "data_offset": 0, 00:21:41.664 "data_size": 0 00:21:41.664 }, 00:21:41.664 { 00:21:41.664 "name": null, 00:21:41.664 "uuid": "743515c7-bc55-408c-96c5-75939990c6cd", 00:21:41.664 "is_configured": false, 00:21:41.664 "data_offset": 0, 00:21:41.664 "data_size": 63488 
00:21:41.664 }, 00:21:41.664 { 00:21:41.664 "name": "BaseBdev3", 00:21:41.664 "uuid": "872a5956-55b3-4eb9-905b-84ced858e734", 00:21:41.664 "is_configured": true, 00:21:41.664 "data_offset": 2048, 00:21:41.664 "data_size": 63488 00:21:41.664 }, 00:21:41.664 { 00:21:41.664 "name": "BaseBdev4", 00:21:41.664 "uuid": "ee5ac0fa-21b5-4132-9440-4d653b59e797", 00:21:41.664 "is_configured": true, 00:21:41.664 "data_offset": 2048, 00:21:41.664 "data_size": 63488 00:21:41.664 } 00:21:41.664 ] 00:21:41.664 }' 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.664 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.228 [2024-11-08 17:09:18.729636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:42.228 BaseBdev1 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:42.228 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.229 [ 00:21:42.229 { 00:21:42.229 "name": "BaseBdev1", 00:21:42.229 "aliases": [ 00:21:42.229 "97da12f9-012d-4d3b-ad10-e82210650f67" 00:21:42.229 ], 00:21:42.229 "product_name": "Malloc disk", 00:21:42.229 "block_size": 512, 00:21:42.229 "num_blocks": 65536, 00:21:42.229 "uuid": "97da12f9-012d-4d3b-ad10-e82210650f67", 00:21:42.229 "assigned_rate_limits": { 00:21:42.229 "rw_ios_per_sec": 0, 00:21:42.229 "rw_mbytes_per_sec": 0, 
00:21:42.229 "r_mbytes_per_sec": 0, 00:21:42.229 "w_mbytes_per_sec": 0 00:21:42.229 }, 00:21:42.229 "claimed": true, 00:21:42.229 "claim_type": "exclusive_write", 00:21:42.229 "zoned": false, 00:21:42.229 "supported_io_types": { 00:21:42.229 "read": true, 00:21:42.229 "write": true, 00:21:42.229 "unmap": true, 00:21:42.229 "flush": true, 00:21:42.229 "reset": true, 00:21:42.229 "nvme_admin": false, 00:21:42.229 "nvme_io": false, 00:21:42.229 "nvme_io_md": false, 00:21:42.229 "write_zeroes": true, 00:21:42.229 "zcopy": true, 00:21:42.229 "get_zone_info": false, 00:21:42.229 "zone_management": false, 00:21:42.229 "zone_append": false, 00:21:42.229 "compare": false, 00:21:42.229 "compare_and_write": false, 00:21:42.229 "abort": true, 00:21:42.229 "seek_hole": false, 00:21:42.229 "seek_data": false, 00:21:42.229 "copy": true, 00:21:42.229 "nvme_iov_md": false 00:21:42.229 }, 00:21:42.229 "memory_domains": [ 00:21:42.229 { 00:21:42.229 "dma_device_id": "system", 00:21:42.229 "dma_device_type": 1 00:21:42.229 }, 00:21:42.229 { 00:21:42.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.229 "dma_device_type": 2 00:21:42.229 } 00:21:42.229 ], 00:21:42.229 "driver_specific": {} 00:21:42.229 } 00:21:42.229 ] 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:42.229 17:09:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.229 "name": "Existed_Raid", 00:21:42.229 "uuid": "5796f2ba-be4f-4673-9bc1-ab258bae48c4", 00:21:42.229 "strip_size_kb": 64, 00:21:42.229 "state": "configuring", 00:21:42.229 "raid_level": "concat", 00:21:42.229 "superblock": true, 00:21:42.229 "num_base_bdevs": 4, 00:21:42.229 "num_base_bdevs_discovered": 3, 00:21:42.229 "num_base_bdevs_operational": 4, 00:21:42.229 "base_bdevs_list": [ 00:21:42.229 { 00:21:42.229 "name": "BaseBdev1", 00:21:42.229 "uuid": "97da12f9-012d-4d3b-ad10-e82210650f67", 00:21:42.229 "is_configured": true, 00:21:42.229 "data_offset": 2048, 00:21:42.229 "data_size": 63488 00:21:42.229 }, 00:21:42.229 { 
00:21:42.229 "name": null, 00:21:42.229 "uuid": "743515c7-bc55-408c-96c5-75939990c6cd", 00:21:42.229 "is_configured": false, 00:21:42.229 "data_offset": 0, 00:21:42.229 "data_size": 63488 00:21:42.229 }, 00:21:42.229 { 00:21:42.229 "name": "BaseBdev3", 00:21:42.229 "uuid": "872a5956-55b3-4eb9-905b-84ced858e734", 00:21:42.229 "is_configured": true, 00:21:42.229 "data_offset": 2048, 00:21:42.229 "data_size": 63488 00:21:42.229 }, 00:21:42.229 { 00:21:42.229 "name": "BaseBdev4", 00:21:42.229 "uuid": "ee5ac0fa-21b5-4132-9440-4d653b59e797", 00:21:42.229 "is_configured": true, 00:21:42.229 "data_offset": 2048, 00:21:42.229 "data_size": 63488 00:21:42.229 } 00:21:42.229 ] 00:21:42.229 }' 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.229 17:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 [2024-11-08 17:09:19.117816] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.487 17:09:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.487 "name": "Existed_Raid", 00:21:42.487 "uuid": "5796f2ba-be4f-4673-9bc1-ab258bae48c4", 00:21:42.487 "strip_size_kb": 64, 00:21:42.487 "state": "configuring", 00:21:42.487 "raid_level": "concat", 00:21:42.487 "superblock": true, 00:21:42.487 "num_base_bdevs": 4, 00:21:42.487 "num_base_bdevs_discovered": 2, 00:21:42.487 "num_base_bdevs_operational": 4, 00:21:42.487 "base_bdevs_list": [ 00:21:42.487 { 00:21:42.487 "name": "BaseBdev1", 00:21:42.487 "uuid": "97da12f9-012d-4d3b-ad10-e82210650f67", 00:21:42.487 "is_configured": true, 00:21:42.487 "data_offset": 2048, 00:21:42.487 "data_size": 63488 00:21:42.487 }, 00:21:42.487 { 00:21:42.487 "name": null, 00:21:42.487 "uuid": "743515c7-bc55-408c-96c5-75939990c6cd", 00:21:42.487 "is_configured": false, 00:21:42.487 "data_offset": 0, 00:21:42.487 "data_size": 63488 00:21:42.487 }, 00:21:42.487 { 00:21:42.487 "name": null, 00:21:42.487 "uuid": "872a5956-55b3-4eb9-905b-84ced858e734", 00:21:42.487 "is_configured": false, 00:21:42.487 "data_offset": 0, 00:21:42.487 "data_size": 63488 00:21:42.487 }, 00:21:42.487 { 00:21:42.487 "name": "BaseBdev4", 00:21:42.487 "uuid": "ee5ac0fa-21b5-4132-9440-4d653b59e797", 00:21:42.487 "is_configured": true, 00:21:42.487 "data_offset": 2048, 00:21:42.487 "data_size": 63488 00:21:42.487 } 00:21:42.487 ] 00:21:42.487 }' 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.487 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.744 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.744 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.744 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.744 17:09:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.002 [2024-11-08 17:09:19.489907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.002 "name": "Existed_Raid", 00:21:43.002 "uuid": "5796f2ba-be4f-4673-9bc1-ab258bae48c4", 00:21:43.002 "strip_size_kb": 64, 00:21:43.002 "state": "configuring", 00:21:43.002 "raid_level": "concat", 00:21:43.002 "superblock": true, 00:21:43.002 "num_base_bdevs": 4, 00:21:43.002 "num_base_bdevs_discovered": 3, 00:21:43.002 "num_base_bdevs_operational": 4, 00:21:43.002 "base_bdevs_list": [ 00:21:43.002 { 00:21:43.002 "name": "BaseBdev1", 00:21:43.002 "uuid": "97da12f9-012d-4d3b-ad10-e82210650f67", 00:21:43.002 "is_configured": true, 00:21:43.002 "data_offset": 2048, 00:21:43.002 "data_size": 63488 00:21:43.002 }, 00:21:43.002 { 00:21:43.002 "name": null, 00:21:43.002 "uuid": "743515c7-bc55-408c-96c5-75939990c6cd", 00:21:43.002 "is_configured": false, 00:21:43.002 "data_offset": 0, 00:21:43.002 "data_size": 63488 00:21:43.002 }, 00:21:43.002 { 00:21:43.002 "name": "BaseBdev3", 00:21:43.002 "uuid": "872a5956-55b3-4eb9-905b-84ced858e734", 00:21:43.002 "is_configured": true, 00:21:43.002 "data_offset": 2048, 00:21:43.002 "data_size": 63488 00:21:43.002 }, 00:21:43.002 { 00:21:43.002 "name": "BaseBdev4", 00:21:43.002 "uuid": 
"ee5ac0fa-21b5-4132-9440-4d653b59e797", 00:21:43.002 "is_configured": true, 00:21:43.002 "data_offset": 2048, 00:21:43.002 "data_size": 63488 00:21:43.002 } 00:21:43.002 ] 00:21:43.002 }' 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.002 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.259 [2024-11-08 17:09:19.866060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.259 "name": "Existed_Raid", 00:21:43.259 "uuid": "5796f2ba-be4f-4673-9bc1-ab258bae48c4", 00:21:43.259 "strip_size_kb": 64, 00:21:43.259 "state": "configuring", 00:21:43.259 "raid_level": "concat", 00:21:43.259 "superblock": true, 00:21:43.259 "num_base_bdevs": 4, 00:21:43.259 "num_base_bdevs_discovered": 2, 00:21:43.259 "num_base_bdevs_operational": 4, 00:21:43.259 "base_bdevs_list": [ 00:21:43.259 { 00:21:43.259 "name": null, 00:21:43.259 
"uuid": "97da12f9-012d-4d3b-ad10-e82210650f67", 00:21:43.259 "is_configured": false, 00:21:43.259 "data_offset": 0, 00:21:43.259 "data_size": 63488 00:21:43.259 }, 00:21:43.259 { 00:21:43.259 "name": null, 00:21:43.259 "uuid": "743515c7-bc55-408c-96c5-75939990c6cd", 00:21:43.259 "is_configured": false, 00:21:43.259 "data_offset": 0, 00:21:43.259 "data_size": 63488 00:21:43.259 }, 00:21:43.259 { 00:21:43.259 "name": "BaseBdev3", 00:21:43.259 "uuid": "872a5956-55b3-4eb9-905b-84ced858e734", 00:21:43.259 "is_configured": true, 00:21:43.259 "data_offset": 2048, 00:21:43.259 "data_size": 63488 00:21:43.259 }, 00:21:43.259 { 00:21:43.259 "name": "BaseBdev4", 00:21:43.259 "uuid": "ee5ac0fa-21b5-4132-9440-4d653b59e797", 00:21:43.259 "is_configured": true, 00:21:43.259 "data_offset": 2048, 00:21:43.259 "data_size": 63488 00:21:43.259 } 00:21:43.259 ] 00:21:43.259 }' 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.259 17:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.823 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.823 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.823 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.823 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:43.823 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.823 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:43.823 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:43.823 17:09:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.823 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.823 [2024-11-08 17:09:20.283865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:43.823 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.824 17:09:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.824 "name": "Existed_Raid", 00:21:43.824 "uuid": "5796f2ba-be4f-4673-9bc1-ab258bae48c4", 00:21:43.824 "strip_size_kb": 64, 00:21:43.824 "state": "configuring", 00:21:43.824 "raid_level": "concat", 00:21:43.824 "superblock": true, 00:21:43.824 "num_base_bdevs": 4, 00:21:43.824 "num_base_bdevs_discovered": 3, 00:21:43.824 "num_base_bdevs_operational": 4, 00:21:43.824 "base_bdevs_list": [ 00:21:43.824 { 00:21:43.824 "name": null, 00:21:43.824 "uuid": "97da12f9-012d-4d3b-ad10-e82210650f67", 00:21:43.824 "is_configured": false, 00:21:43.824 "data_offset": 0, 00:21:43.824 "data_size": 63488 00:21:43.824 }, 00:21:43.824 { 00:21:43.824 "name": "BaseBdev2", 00:21:43.824 "uuid": "743515c7-bc55-408c-96c5-75939990c6cd", 00:21:43.824 "is_configured": true, 00:21:43.824 "data_offset": 2048, 00:21:43.824 "data_size": 63488 00:21:43.824 }, 00:21:43.824 { 00:21:43.824 "name": "BaseBdev3", 00:21:43.824 "uuid": "872a5956-55b3-4eb9-905b-84ced858e734", 00:21:43.824 "is_configured": true, 00:21:43.824 "data_offset": 2048, 00:21:43.824 "data_size": 63488 00:21:43.824 }, 00:21:43.824 { 00:21:43.824 "name": "BaseBdev4", 00:21:43.824 "uuid": "ee5ac0fa-21b5-4132-9440-4d653b59e797", 00:21:43.824 "is_configured": true, 00:21:43.824 "data_offset": 2048, 00:21:43.824 "data_size": 63488 00:21:43.824 } 00:21:43.824 ] 00:21:43.824 }' 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.824 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.082 17:09:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 97da12f9-012d-4d3b-ad10-e82210650f67 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.082 [2024-11-08 17:09:20.732635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:44.082 [2024-11-08 17:09:20.733057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:44.082 [2024-11-08 17:09:20.733077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:44.082 [2024-11-08 17:09:20.733348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:21:44.082 [2024-11-08 17:09:20.733480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:44.082 [2024-11-08 17:09:20.733491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:44.082 [2024-11-08 17:09:20.733623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.082 NewBaseBdev 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:44.082 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.083 17:09:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.083 [ 00:21:44.083 { 00:21:44.083 "name": "NewBaseBdev", 00:21:44.083 "aliases": [ 00:21:44.083 "97da12f9-012d-4d3b-ad10-e82210650f67" 00:21:44.083 ], 00:21:44.083 "product_name": "Malloc disk", 00:21:44.083 "block_size": 512, 00:21:44.083 "num_blocks": 65536, 00:21:44.083 "uuid": "97da12f9-012d-4d3b-ad10-e82210650f67", 00:21:44.083 "assigned_rate_limits": { 00:21:44.083 "rw_ios_per_sec": 0, 00:21:44.083 "rw_mbytes_per_sec": 0, 00:21:44.083 "r_mbytes_per_sec": 0, 00:21:44.083 "w_mbytes_per_sec": 0 00:21:44.083 }, 00:21:44.083 "claimed": true, 00:21:44.083 "claim_type": "exclusive_write", 00:21:44.083 "zoned": false, 00:21:44.083 "supported_io_types": { 00:21:44.083 "read": true, 00:21:44.083 "write": true, 00:21:44.083 "unmap": true, 00:21:44.083 "flush": true, 00:21:44.083 "reset": true, 00:21:44.083 "nvme_admin": false, 00:21:44.083 "nvme_io": false, 00:21:44.083 "nvme_io_md": false, 00:21:44.083 "write_zeroes": true, 00:21:44.083 "zcopy": true, 00:21:44.083 "get_zone_info": false, 00:21:44.083 "zone_management": false, 00:21:44.083 "zone_append": false, 00:21:44.083 "compare": false, 00:21:44.083 "compare_and_write": false, 00:21:44.083 "abort": true, 00:21:44.083 "seek_hole": false, 00:21:44.083 "seek_data": false, 00:21:44.083 "copy": true, 00:21:44.083 "nvme_iov_md": false 00:21:44.083 }, 00:21:44.083 "memory_domains": [ 00:21:44.083 { 00:21:44.083 "dma_device_id": "system", 00:21:44.083 "dma_device_type": 1 00:21:44.083 }, 00:21:44.083 { 00:21:44.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.083 "dma_device_type": 2 00:21:44.083 } 00:21:44.083 ], 00:21:44.083 "driver_specific": {} 00:21:44.083 } 00:21:44.083 ] 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:44.083 17:09:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.083 "name": "Existed_Raid", 00:21:44.083 "uuid": "5796f2ba-be4f-4673-9bc1-ab258bae48c4", 00:21:44.083 "strip_size_kb": 64, 00:21:44.083 
"state": "online", 00:21:44.083 "raid_level": "concat", 00:21:44.083 "superblock": true, 00:21:44.083 "num_base_bdevs": 4, 00:21:44.083 "num_base_bdevs_discovered": 4, 00:21:44.083 "num_base_bdevs_operational": 4, 00:21:44.083 "base_bdevs_list": [ 00:21:44.083 { 00:21:44.083 "name": "NewBaseBdev", 00:21:44.083 "uuid": "97da12f9-012d-4d3b-ad10-e82210650f67", 00:21:44.083 "is_configured": true, 00:21:44.083 "data_offset": 2048, 00:21:44.083 "data_size": 63488 00:21:44.083 }, 00:21:44.083 { 00:21:44.083 "name": "BaseBdev2", 00:21:44.083 "uuid": "743515c7-bc55-408c-96c5-75939990c6cd", 00:21:44.083 "is_configured": true, 00:21:44.083 "data_offset": 2048, 00:21:44.083 "data_size": 63488 00:21:44.083 }, 00:21:44.083 { 00:21:44.083 "name": "BaseBdev3", 00:21:44.083 "uuid": "872a5956-55b3-4eb9-905b-84ced858e734", 00:21:44.083 "is_configured": true, 00:21:44.083 "data_offset": 2048, 00:21:44.083 "data_size": 63488 00:21:44.083 }, 00:21:44.083 { 00:21:44.083 "name": "BaseBdev4", 00:21:44.083 "uuid": "ee5ac0fa-21b5-4132-9440-4d653b59e797", 00:21:44.083 "is_configured": true, 00:21:44.083 "data_offset": 2048, 00:21:44.083 "data_size": 63488 00:21:44.083 } 00:21:44.083 ] 00:21:44.083 }' 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.083 17:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:44.651 
17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 [2024-11-08 17:09:21.081206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:44.651 "name": "Existed_Raid", 00:21:44.651 "aliases": [ 00:21:44.651 "5796f2ba-be4f-4673-9bc1-ab258bae48c4" 00:21:44.651 ], 00:21:44.651 "product_name": "Raid Volume", 00:21:44.651 "block_size": 512, 00:21:44.651 "num_blocks": 253952, 00:21:44.651 "uuid": "5796f2ba-be4f-4673-9bc1-ab258bae48c4", 00:21:44.651 "assigned_rate_limits": { 00:21:44.651 "rw_ios_per_sec": 0, 00:21:44.651 "rw_mbytes_per_sec": 0, 00:21:44.651 "r_mbytes_per_sec": 0, 00:21:44.651 "w_mbytes_per_sec": 0 00:21:44.651 }, 00:21:44.651 "claimed": false, 00:21:44.651 "zoned": false, 00:21:44.651 "supported_io_types": { 00:21:44.651 "read": true, 00:21:44.651 "write": true, 00:21:44.651 "unmap": true, 00:21:44.651 "flush": true, 00:21:44.651 "reset": true, 00:21:44.651 "nvme_admin": false, 00:21:44.651 "nvme_io": false, 00:21:44.651 "nvme_io_md": false, 00:21:44.651 "write_zeroes": true, 00:21:44.651 "zcopy": false, 00:21:44.651 "get_zone_info": false, 00:21:44.651 "zone_management": false, 00:21:44.651 "zone_append": false, 00:21:44.651 "compare": false, 00:21:44.651 "compare_and_write": false, 00:21:44.651 "abort": 
false, 00:21:44.651 "seek_hole": false, 00:21:44.651 "seek_data": false, 00:21:44.651 "copy": false, 00:21:44.651 "nvme_iov_md": false 00:21:44.651 }, 00:21:44.651 "memory_domains": [ 00:21:44.651 { 00:21:44.651 "dma_device_id": "system", 00:21:44.651 "dma_device_type": 1 00:21:44.651 }, 00:21:44.651 { 00:21:44.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.651 "dma_device_type": 2 00:21:44.651 }, 00:21:44.651 { 00:21:44.651 "dma_device_id": "system", 00:21:44.651 "dma_device_type": 1 00:21:44.651 }, 00:21:44.651 { 00:21:44.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.651 "dma_device_type": 2 00:21:44.651 }, 00:21:44.651 { 00:21:44.651 "dma_device_id": "system", 00:21:44.651 "dma_device_type": 1 00:21:44.651 }, 00:21:44.651 { 00:21:44.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.651 "dma_device_type": 2 00:21:44.651 }, 00:21:44.651 { 00:21:44.651 "dma_device_id": "system", 00:21:44.651 "dma_device_type": 1 00:21:44.651 }, 00:21:44.651 { 00:21:44.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.651 "dma_device_type": 2 00:21:44.651 } 00:21:44.651 ], 00:21:44.651 "driver_specific": { 00:21:44.651 "raid": { 00:21:44.651 "uuid": "5796f2ba-be4f-4673-9bc1-ab258bae48c4", 00:21:44.651 "strip_size_kb": 64, 00:21:44.651 "state": "online", 00:21:44.651 "raid_level": "concat", 00:21:44.651 "superblock": true, 00:21:44.651 "num_base_bdevs": 4, 00:21:44.651 "num_base_bdevs_discovered": 4, 00:21:44.651 "num_base_bdevs_operational": 4, 00:21:44.651 "base_bdevs_list": [ 00:21:44.651 { 00:21:44.651 "name": "NewBaseBdev", 00:21:44.651 "uuid": "97da12f9-012d-4d3b-ad10-e82210650f67", 00:21:44.651 "is_configured": true, 00:21:44.651 "data_offset": 2048, 00:21:44.651 "data_size": 63488 00:21:44.651 }, 00:21:44.651 { 00:21:44.651 "name": "BaseBdev2", 00:21:44.651 "uuid": "743515c7-bc55-408c-96c5-75939990c6cd", 00:21:44.651 "is_configured": true, 00:21:44.651 "data_offset": 2048, 00:21:44.651 "data_size": 63488 00:21:44.651 }, 00:21:44.651 { 00:21:44.651 
"name": "BaseBdev3", 00:21:44.651 "uuid": "872a5956-55b3-4eb9-905b-84ced858e734", 00:21:44.651 "is_configured": true, 00:21:44.651 "data_offset": 2048, 00:21:44.651 "data_size": 63488 00:21:44.651 }, 00:21:44.651 { 00:21:44.651 "name": "BaseBdev4", 00:21:44.651 "uuid": "ee5ac0fa-21b5-4132-9440-4d653b59e797", 00:21:44.651 "is_configured": true, 00:21:44.651 "data_offset": 2048, 00:21:44.651 "data_size": 63488 00:21:44.651 } 00:21:44.651 ] 00:21:44.651 } 00:21:44.651 } 00:21:44.651 }' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:44.651 BaseBdev2 00:21:44.651 BaseBdev3 00:21:44.651 BaseBdev4' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:44.651 17:09:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.651 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.652 [2024-11-08 17:09:21.304826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:44.652 [2024-11-08 17:09:21.304856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:44.652 [2024-11-08 17:09:21.304943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.652 [2024-11-08 17:09:21.305022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.652 [2024-11-08 17:09:21.305033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70490 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70490 ']' 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70490 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70490 00:21:44.652 killing process with pid 70490 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70490' 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70490 00:21:44.652 [2024-11-08 17:09:21.338420] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.652 17:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70490 00:21:44.909 [2024-11-08 17:09:21.595221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:45.842 ************************************ 00:21:45.842 END TEST raid_state_function_test_sb 00:21:45.842 ************************************ 00:21:45.842 17:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:45.842 00:21:45.842 real 0m8.717s 00:21:45.842 user 0m13.756s 00:21:45.842 sys 
0m1.468s 00:21:45.842 17:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:45.842 17:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.842 17:09:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:21:45.842 17:09:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:21:45.842 17:09:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:45.842 17:09:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:45.842 ************************************ 00:21:45.842 START TEST raid_superblock_test 00:21:45.842 ************************************ 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:45.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71133 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71133 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 71133 ']' 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:45.842 17:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.842 [2024-11-08 17:09:22.498143] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:21:45.842 [2024-11-08 17:09:22.498461] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71133 ] 00:21:46.100 [2024-11-08 17:09:22.657248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.100 [2024-11-08 17:09:22.776455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.359 [2024-11-08 17:09:22.924149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.359 [2024-11-08 17:09:22.924380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:46.925 17:09:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.925 malloc1 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.925 [2024-11-08 17:09:23.391027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:46.925 [2024-11-08 17:09:23.391216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.925 [2024-11-08 17:09:23.391250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:46.925 [2024-11-08 17:09:23.391262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.925 [2024-11-08 17:09:23.393644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.925 [2024-11-08 17:09:23.393678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:21:46.925 pt1 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.925 malloc2 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.925 [2024-11-08 17:09:23.433408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:46.925 
[2024-11-08 17:09:23.433482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.925 [2024-11-08 17:09:23.433508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:46.925 [2024-11-08 17:09:23.433517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.925 [2024-11-08 17:09:23.435908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.925 [2024-11-08 17:09:23.435978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:46.925 pt2 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.925 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.926 malloc3 00:21:46.926 17:09:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.926 [2024-11-08 17:09:23.481157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:46.926 [2024-11-08 17:09:23.481220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.926 [2024-11-08 17:09:23.481246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:46.926 [2024-11-08 17:09:23.481257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.926 [2024-11-08 17:09:23.483590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.926 [2024-11-08 17:09:23.483632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:46.926 pt3 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:46.926 17:09:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.926 malloc4 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.926 [2024-11-08 17:09:23.519591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:46.926 [2024-11-08 17:09:23.519786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.926 [2024-11-08 17:09:23.519815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:46.926 [2024-11-08 17:09:23.519825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.926 [2024-11-08 17:09:23.522159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.926 [2024-11-08 17:09:23.522194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:46.926 pt4 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:46.926 17:09:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.926 [2024-11-08 17:09:23.527624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:46.926 [2024-11-08 17:09:23.529597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:46.926 [2024-11-08 17:09:23.529676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:46.926 [2024-11-08 17:09:23.529743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:46.926 [2024-11-08 17:09:23.529964] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:46.926 [2024-11-08 17:09:23.529975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:46.926 [2024-11-08 17:09:23.530262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:46.926 [2024-11-08 17:09:23.530427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:46.926 [2024-11-08 17:09:23.530438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:46.926 [2024-11-08 17:09:23.530591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:46.926 17:09:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.926 "name": "raid_bdev1", 00:21:46.926 "uuid": "b3a9a389-38f8-4f1f-a7ec-c41f37bcf429", 00:21:46.926 "strip_size_kb": 64, 00:21:46.926 "state": "online", 00:21:46.926 "raid_level": "concat", 00:21:46.926 "superblock": true, 00:21:46.926 "num_base_bdevs": 4, 00:21:46.926 "num_base_bdevs_discovered": 4, 00:21:46.926 "num_base_bdevs_operational": 4, 00:21:46.926 "base_bdevs_list": [ 
00:21:46.926 { 00:21:46.926 "name": "pt1", 00:21:46.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:46.926 "is_configured": true, 00:21:46.926 "data_offset": 2048, 00:21:46.926 "data_size": 63488 00:21:46.926 }, 00:21:46.926 { 00:21:46.926 "name": "pt2", 00:21:46.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:46.926 "is_configured": true, 00:21:46.926 "data_offset": 2048, 00:21:46.926 "data_size": 63488 00:21:46.926 }, 00:21:46.926 { 00:21:46.926 "name": "pt3", 00:21:46.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:46.926 "is_configured": true, 00:21:46.926 "data_offset": 2048, 00:21:46.926 "data_size": 63488 00:21:46.926 }, 00:21:46.926 { 00:21:46.926 "name": "pt4", 00:21:46.926 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:46.926 "is_configured": true, 00:21:46.926 "data_offset": 2048, 00:21:46.926 "data_size": 63488 00:21:46.926 } 00:21:46.926 ] 00:21:46.926 }' 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.926 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.184 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:47.184 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:47.184 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:47.184 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:47.184 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:47.184 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:47.184 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:47.184 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:47.184 
17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.184 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.184 [2024-11-08 17:09:23.880084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:47.185 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.442 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:47.442 "name": "raid_bdev1", 00:21:47.442 "aliases": [ 00:21:47.442 "b3a9a389-38f8-4f1f-a7ec-c41f37bcf429" 00:21:47.442 ], 00:21:47.442 "product_name": "Raid Volume", 00:21:47.442 "block_size": 512, 00:21:47.442 "num_blocks": 253952, 00:21:47.443 "uuid": "b3a9a389-38f8-4f1f-a7ec-c41f37bcf429", 00:21:47.443 "assigned_rate_limits": { 00:21:47.443 "rw_ios_per_sec": 0, 00:21:47.443 "rw_mbytes_per_sec": 0, 00:21:47.443 "r_mbytes_per_sec": 0, 00:21:47.443 "w_mbytes_per_sec": 0 00:21:47.443 }, 00:21:47.443 "claimed": false, 00:21:47.443 "zoned": false, 00:21:47.443 "supported_io_types": { 00:21:47.443 "read": true, 00:21:47.443 "write": true, 00:21:47.443 "unmap": true, 00:21:47.443 "flush": true, 00:21:47.443 "reset": true, 00:21:47.443 "nvme_admin": false, 00:21:47.443 "nvme_io": false, 00:21:47.443 "nvme_io_md": false, 00:21:47.443 "write_zeroes": true, 00:21:47.443 "zcopy": false, 00:21:47.443 "get_zone_info": false, 00:21:47.443 "zone_management": false, 00:21:47.443 "zone_append": false, 00:21:47.443 "compare": false, 00:21:47.443 "compare_and_write": false, 00:21:47.443 "abort": false, 00:21:47.443 "seek_hole": false, 00:21:47.443 "seek_data": false, 00:21:47.443 "copy": false, 00:21:47.443 "nvme_iov_md": false 00:21:47.443 }, 00:21:47.443 "memory_domains": [ 00:21:47.443 { 00:21:47.443 "dma_device_id": "system", 00:21:47.443 "dma_device_type": 1 00:21:47.443 }, 00:21:47.443 { 00:21:47.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.443 
"dma_device_type": 2 00:21:47.443 }, 00:21:47.443 { 00:21:47.443 "dma_device_id": "system", 00:21:47.443 "dma_device_type": 1 00:21:47.443 }, 00:21:47.443 { 00:21:47.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.443 "dma_device_type": 2 00:21:47.443 }, 00:21:47.443 { 00:21:47.443 "dma_device_id": "system", 00:21:47.443 "dma_device_type": 1 00:21:47.443 }, 00:21:47.443 { 00:21:47.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.443 "dma_device_type": 2 00:21:47.443 }, 00:21:47.443 { 00:21:47.443 "dma_device_id": "system", 00:21:47.443 "dma_device_type": 1 00:21:47.443 }, 00:21:47.443 { 00:21:47.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.443 "dma_device_type": 2 00:21:47.443 } 00:21:47.443 ], 00:21:47.443 "driver_specific": { 00:21:47.443 "raid": { 00:21:47.443 "uuid": "b3a9a389-38f8-4f1f-a7ec-c41f37bcf429", 00:21:47.443 "strip_size_kb": 64, 00:21:47.443 "state": "online", 00:21:47.443 "raid_level": "concat", 00:21:47.443 "superblock": true, 00:21:47.443 "num_base_bdevs": 4, 00:21:47.443 "num_base_bdevs_discovered": 4, 00:21:47.443 "num_base_bdevs_operational": 4, 00:21:47.443 "base_bdevs_list": [ 00:21:47.443 { 00:21:47.443 "name": "pt1", 00:21:47.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:47.443 "is_configured": true, 00:21:47.443 "data_offset": 2048, 00:21:47.443 "data_size": 63488 00:21:47.443 }, 00:21:47.443 { 00:21:47.443 "name": "pt2", 00:21:47.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:47.443 "is_configured": true, 00:21:47.443 "data_offset": 2048, 00:21:47.443 "data_size": 63488 00:21:47.443 }, 00:21:47.443 { 00:21:47.443 "name": "pt3", 00:21:47.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:47.443 "is_configured": true, 00:21:47.443 "data_offset": 2048, 00:21:47.443 "data_size": 63488 00:21:47.443 }, 00:21:47.443 { 00:21:47.443 "name": "pt4", 00:21:47.443 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:47.443 "is_configured": true, 00:21:47.443 "data_offset": 2048, 00:21:47.443 
"data_size": 63488 00:21:47.443 } 00:21:47.443 ] 00:21:47.443 } 00:21:47.443 } 00:21:47.443 }' 00:21:47.443 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:47.443 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:47.443 pt2 00:21:47.443 pt3 00:21:47.443 pt4' 00:21:47.443 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:47.443 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:47.443 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:47.443 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:47.443 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.443 17:09:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:47.443 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.443 17:09:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:47.443 
17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:47.443 [2024-11-08 17:09:24.132113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:47.443 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b3a9a389-38f8-4f1f-a7ec-c41f37bcf429 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b3a9a389-38f8-4f1f-a7ec-c41f37bcf429 ']' 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.702 [2024-11-08 17:09:24.167769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:47.702 [2024-11-08 17:09:24.167910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:47.702 [2024-11-08 17:09:24.168015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:47.702 [2024-11-08 17:09:24.168096] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:47.702 [2024-11-08 17:09:24.168111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 
malloc4'\''' -n raid_bdev1 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.702 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.702 [2024-11-08 17:09:24.299847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:47.702 [2024-11-08 17:09:24.301948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:47.702 [2024-11-08 17:09:24.302003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:47.702 [2024-11-08 17:09:24.302039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:47.702 [2024-11-08 17:09:24.302094] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:47.702 [2024-11-08 17:09:24.302151] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on 
bdev malloc2 00:21:47.702 [2024-11-08 17:09:24.302172] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:47.702 [2024-11-08 17:09:24.302192] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:47.702 [2024-11-08 17:09:24.302206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:47.703 [2024-11-08 17:09:24.302219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:47.703 request: 00:21:47.703 { 00:21:47.703 "name": "raid_bdev1", 00:21:47.703 "raid_level": "concat", 00:21:47.703 "base_bdevs": [ 00:21:47.703 "malloc1", 00:21:47.703 "malloc2", 00:21:47.703 "malloc3", 00:21:47.703 "malloc4" 00:21:47.703 ], 00:21:47.703 "strip_size_kb": 64, 00:21:47.703 "superblock": false, 00:21:47.703 "method": "bdev_raid_create", 00:21:47.703 "req_id": 1 00:21:47.703 } 00:21:47.703 Got JSON-RPC error response 00:21:47.703 response: 00:21:47.703 { 00:21:47.703 "code": -17, 00:21:47.703 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:47.703 } 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.703 [2024-11-08 17:09:24.347821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:47.703 [2024-11-08 17:09:24.348001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.703 [2024-11-08 17:09:24.348041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:47.703 [2024-11-08 17:09:24.348093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.703 [2024-11-08 17:09:24.350518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.703 [2024-11-08 17:09:24.350649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:47.703 [2024-11-08 17:09:24.350797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:47.703 [2024-11-08 17:09:24.350884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:47.703 pt1 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:47.703 "name": "raid_bdev1", 00:21:47.703 "uuid": "b3a9a389-38f8-4f1f-a7ec-c41f37bcf429", 00:21:47.703 "strip_size_kb": 64, 00:21:47.703 "state": "configuring", 00:21:47.703 "raid_level": "concat", 00:21:47.703 "superblock": true, 00:21:47.703 "num_base_bdevs": 4, 00:21:47.703 "num_base_bdevs_discovered": 1, 
00:21:47.703 "num_base_bdevs_operational": 4, 00:21:47.703 "base_bdevs_list": [ 00:21:47.703 { 00:21:47.703 "name": "pt1", 00:21:47.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:47.703 "is_configured": true, 00:21:47.703 "data_offset": 2048, 00:21:47.703 "data_size": 63488 00:21:47.703 }, 00:21:47.703 { 00:21:47.703 "name": null, 00:21:47.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:47.703 "is_configured": false, 00:21:47.703 "data_offset": 2048, 00:21:47.703 "data_size": 63488 00:21:47.703 }, 00:21:47.703 { 00:21:47.703 "name": null, 00:21:47.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:47.703 "is_configured": false, 00:21:47.703 "data_offset": 2048, 00:21:47.703 "data_size": 63488 00:21:47.703 }, 00:21:47.703 { 00:21:47.703 "name": null, 00:21:47.703 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:47.703 "is_configured": false, 00:21:47.703 "data_offset": 2048, 00:21:47.703 "data_size": 63488 00:21:47.703 } 00:21:47.703 ] 00:21:47.703 }' 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:47.703 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.285 [2024-11-08 17:09:24.687918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:48.285 [2024-11-08 17:09:24.687999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.285 [2024-11-08 17:09:24.688022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:21:48.285 [2024-11-08 17:09:24.688035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.285 [2024-11-08 17:09:24.688499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.285 [2024-11-08 17:09:24.688517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:48.285 [2024-11-08 17:09:24.688602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:48.285 [2024-11-08 17:09:24.688628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:48.285 pt2 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.285 [2024-11-08 17:09:24.695918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:48.285 
17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.285 "name": "raid_bdev1", 00:21:48.285 "uuid": "b3a9a389-38f8-4f1f-a7ec-c41f37bcf429", 00:21:48.285 "strip_size_kb": 64, 00:21:48.285 "state": "configuring", 00:21:48.285 "raid_level": "concat", 00:21:48.285 "superblock": true, 00:21:48.285 "num_base_bdevs": 4, 00:21:48.285 "num_base_bdevs_discovered": 1, 00:21:48.285 "num_base_bdevs_operational": 4, 00:21:48.285 "base_bdevs_list": [ 00:21:48.285 { 00:21:48.285 "name": "pt1", 00:21:48.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:48.285 "is_configured": true, 00:21:48.285 "data_offset": 2048, 00:21:48.285 "data_size": 63488 00:21:48.285 }, 00:21:48.285 { 00:21:48.285 "name": null, 00:21:48.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:48.285 "is_configured": false, 00:21:48.285 "data_offset": 0, 00:21:48.285 "data_size": 63488 00:21:48.285 }, 00:21:48.285 { 00:21:48.285 "name": null, 00:21:48.285 "uuid": "00000000-0000-0000-0000-000000000003", 
00:21:48.285 "is_configured": false, 00:21:48.285 "data_offset": 2048, 00:21:48.285 "data_size": 63488 00:21:48.285 }, 00:21:48.285 { 00:21:48.285 "name": null, 00:21:48.285 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:48.285 "is_configured": false, 00:21:48.285 "data_offset": 2048, 00:21:48.285 "data_size": 63488 00:21:48.285 } 00:21:48.285 ] 00:21:48.285 }' 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.285 17:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.544 [2024-11-08 17:09:25.051990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:48.544 [2024-11-08 17:09:25.052054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.544 [2024-11-08 17:09:25.052074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:48.544 [2024-11-08 17:09:25.052083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.544 [2024-11-08 17:09:25.052550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.544 [2024-11-08 17:09:25.052564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:48.544 [2024-11-08 17:09:25.052646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
00:21:48.544 [2024-11-08 17:09:25.052668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:48.544 pt2 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.544 [2024-11-08 17:09:25.059960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:48.544 [2024-11-08 17:09:25.060008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.544 [2024-11-08 17:09:25.060031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:48.544 [2024-11-08 17:09:25.060040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.544 [2024-11-08 17:09:25.060438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.544 [2024-11-08 17:09:25.060457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:48.544 [2024-11-08 17:09:25.060523] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:48.544 [2024-11-08 17:09:25.060545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:48.544 pt3 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 
00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.544 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.544 [2024-11-08 17:09:25.067934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:48.544 [2024-11-08 17:09:25.067979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.544 [2024-11-08 17:09:25.067997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:48.544 [2024-11-08 17:09:25.068005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.544 [2024-11-08 17:09:25.068388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.544 [2024-11-08 17:09:25.068407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:48.544 [2024-11-08 17:09:25.068468] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:48.544 [2024-11-08 17:09:25.068485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:48.544 [2024-11-08 17:09:25.068623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:48.544 [2024-11-08 17:09:25.068637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:48.544 [2024-11-08 17:09:25.068910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:48.544 [2024-11-08 17:09:25.069045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:48.545 [2024-11-08 17:09:25.069056] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:48.545 [2024-11-08 17:09:25.069181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.545 pt4 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.545 17:09:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.545 "name": "raid_bdev1", 00:21:48.545 "uuid": "b3a9a389-38f8-4f1f-a7ec-c41f37bcf429", 00:21:48.545 "strip_size_kb": 64, 00:21:48.545 "state": "online", 00:21:48.545 "raid_level": "concat", 00:21:48.545 "superblock": true, 00:21:48.545 "num_base_bdevs": 4, 00:21:48.545 "num_base_bdevs_discovered": 4, 00:21:48.545 "num_base_bdevs_operational": 4, 00:21:48.545 "base_bdevs_list": [ 00:21:48.545 { 00:21:48.545 "name": "pt1", 00:21:48.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:48.545 "is_configured": true, 00:21:48.545 "data_offset": 2048, 00:21:48.545 "data_size": 63488 00:21:48.545 }, 00:21:48.545 { 00:21:48.545 "name": "pt2", 00:21:48.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:48.545 "is_configured": true, 00:21:48.545 "data_offset": 2048, 00:21:48.545 "data_size": 63488 00:21:48.545 }, 00:21:48.545 { 00:21:48.545 "name": "pt3", 00:21:48.545 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:48.545 "is_configured": true, 00:21:48.545 "data_offset": 2048, 00:21:48.545 "data_size": 63488 00:21:48.545 }, 00:21:48.545 { 00:21:48.545 "name": "pt4", 00:21:48.545 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:48.545 "is_configured": true, 00:21:48.545 "data_offset": 2048, 00:21:48.545 "data_size": 63488 00:21:48.545 } 00:21:48.545 ] 00:21:48.545 }' 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.545 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.803 [2024-11-08 17:09:25.416432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.803 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:48.803 "name": "raid_bdev1", 00:21:48.803 "aliases": [ 00:21:48.803 "b3a9a389-38f8-4f1f-a7ec-c41f37bcf429" 00:21:48.803 ], 00:21:48.803 "product_name": "Raid Volume", 00:21:48.803 "block_size": 512, 00:21:48.803 "num_blocks": 253952, 00:21:48.803 "uuid": "b3a9a389-38f8-4f1f-a7ec-c41f37bcf429", 00:21:48.803 "assigned_rate_limits": { 00:21:48.803 "rw_ios_per_sec": 0, 00:21:48.803 "rw_mbytes_per_sec": 0, 00:21:48.803 "r_mbytes_per_sec": 0, 00:21:48.803 "w_mbytes_per_sec": 0 00:21:48.803 }, 00:21:48.803 "claimed": false, 00:21:48.803 "zoned": false, 00:21:48.803 "supported_io_types": { 00:21:48.803 "read": true, 00:21:48.803 "write": true, 00:21:48.803 "unmap": true, 00:21:48.803 "flush": true, 00:21:48.803 "reset": true, 00:21:48.803 "nvme_admin": false, 
00:21:48.803 "nvme_io": false, 00:21:48.803 "nvme_io_md": false, 00:21:48.803 "write_zeroes": true, 00:21:48.803 "zcopy": false, 00:21:48.803 "get_zone_info": false, 00:21:48.803 "zone_management": false, 00:21:48.803 "zone_append": false, 00:21:48.803 "compare": false, 00:21:48.803 "compare_and_write": false, 00:21:48.803 "abort": false, 00:21:48.803 "seek_hole": false, 00:21:48.803 "seek_data": false, 00:21:48.803 "copy": false, 00:21:48.803 "nvme_iov_md": false 00:21:48.803 }, 00:21:48.803 "memory_domains": [ 00:21:48.803 { 00:21:48.803 "dma_device_id": "system", 00:21:48.803 "dma_device_type": 1 00:21:48.803 }, 00:21:48.803 { 00:21:48.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.803 "dma_device_type": 2 00:21:48.803 }, 00:21:48.803 { 00:21:48.803 "dma_device_id": "system", 00:21:48.803 "dma_device_type": 1 00:21:48.803 }, 00:21:48.803 { 00:21:48.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.803 "dma_device_type": 2 00:21:48.803 }, 00:21:48.803 { 00:21:48.803 "dma_device_id": "system", 00:21:48.803 "dma_device_type": 1 00:21:48.803 }, 00:21:48.803 { 00:21:48.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.803 "dma_device_type": 2 00:21:48.803 }, 00:21:48.803 { 00:21:48.803 "dma_device_id": "system", 00:21:48.803 "dma_device_type": 1 00:21:48.803 }, 00:21:48.803 { 00:21:48.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.803 "dma_device_type": 2 00:21:48.803 } 00:21:48.803 ], 00:21:48.803 "driver_specific": { 00:21:48.803 "raid": { 00:21:48.803 "uuid": "b3a9a389-38f8-4f1f-a7ec-c41f37bcf429", 00:21:48.803 "strip_size_kb": 64, 00:21:48.803 "state": "online", 00:21:48.803 "raid_level": "concat", 00:21:48.803 "superblock": true, 00:21:48.803 "num_base_bdevs": 4, 00:21:48.803 "num_base_bdevs_discovered": 4, 00:21:48.803 "num_base_bdevs_operational": 4, 00:21:48.803 "base_bdevs_list": [ 00:21:48.803 { 00:21:48.803 "name": "pt1", 00:21:48.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:48.803 "is_configured": true, 00:21:48.803 
"data_offset": 2048, 00:21:48.803 "data_size": 63488 00:21:48.803 }, 00:21:48.803 { 00:21:48.803 "name": "pt2", 00:21:48.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:48.803 "is_configured": true, 00:21:48.803 "data_offset": 2048, 00:21:48.803 "data_size": 63488 00:21:48.803 }, 00:21:48.803 { 00:21:48.803 "name": "pt3", 00:21:48.803 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:48.803 "is_configured": true, 00:21:48.803 "data_offset": 2048, 00:21:48.803 "data_size": 63488 00:21:48.803 }, 00:21:48.803 { 00:21:48.803 "name": "pt4", 00:21:48.803 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:48.803 "is_configured": true, 00:21:48.803 "data_offset": 2048, 00:21:48.803 "data_size": 63488 00:21:48.804 } 00:21:48.804 ] 00:21:48.804 } 00:21:48.804 } 00:21:48.804 }' 00:21:48.804 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:48.804 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:48.804 pt2 00:21:48.804 pt3 00:21:48.804 pt4' 00:21:48.804 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:48.804 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:48.804 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:48.804 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:48.804 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.804 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.804 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:49.061 17:09:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.061 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.062 [2024-11-08 17:09:25.648451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b3a9a389-38f8-4f1f-a7ec-c41f37bcf429 '!=' b3a9a389-38f8-4f1f-a7ec-c41f37bcf429 ']' 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71133 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 71133 ']' 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 71133 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71133 00:21:49.062 killing process with pid 71133 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71133' 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 71133 00:21:49.062 [2024-11-08 17:09:25.699717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:49.062 17:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 71133 00:21:49.062 [2024-11-08 17:09:25.699832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.062 [2024-11-08 17:09:25.699922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:49.062 [2024-11-08 17:09:25.699932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:49.319 [2024-11-08 17:09:25.958889] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:21:50.254 17:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:50.254 00:21:50.254 real 0m4.288s 00:21:50.254 user 0m6.114s 00:21:50.254 sys 0m0.713s 00:21:50.254 ************************************ 00:21:50.254 END TEST raid_superblock_test 00:21:50.254 ************************************ 00:21:50.254 17:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:50.254 17:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.254 17:09:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:21:50.254 17:09:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:50.254 17:09:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:50.254 17:09:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:50.254 ************************************ 00:21:50.254 START TEST raid_read_error_test 00:21:50.254 ************************************ 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 
00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:50.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kK0OzPomSx 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71381 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71381 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71381 ']' 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:50.254 17:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.255 17:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:50.255 [2024-11-08 17:09:26.878691] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:21:50.255 [2024-11-08 17:09:26.878845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71381 ] 00:21:50.513 [2024-11-08 17:09:27.041573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.513 [2024-11-08 17:09:27.161308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.770 [2024-11-08 17:09:27.312388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:50.770 [2024-11-08 17:09:27.312598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.028 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:51.028 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:21:51.028 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:51.028 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:51.028 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.028 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.286 BaseBdev1_malloc 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.286 true 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.286 [2024-11-08 17:09:27.773257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:51.286 [2024-11-08 17:09:27.773315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.286 [2024-11-08 17:09:27.773337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:51.286 [2024-11-08 17:09:27.773349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.286 [2024-11-08 17:09:27.775642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.286 [2024-11-08 17:09:27.775831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:51.286 BaseBdev1 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:51.286 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 BaseBdev2_malloc 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 true 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 [2024-11-08 17:09:27.819293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:51.287 [2024-11-08 17:09:27.819452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.287 [2024-11-08 17:09:27.819475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:51.287 [2024-11-08 17:09:27.819486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.287 [2024-11-08 17:09:27.821772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.287 [2024-11-08 17:09:27.821817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:51.287 BaseBdev2 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 BaseBdev3_malloc 00:21:51.287 17:09:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 true 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 [2024-11-08 17:09:27.893310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:51.287 [2024-11-08 17:09:27.893429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.287 [2024-11-08 17:09:27.893473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:51.287 [2024-11-08 17:09:27.893496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.287 [2024-11-08 17:09:27.896999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.287 [2024-11-08 17:09:27.897075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:51.287 BaseBdev3 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 BaseBdev4_malloc 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 true 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 [2024-11-08 17:09:27.961728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:51.287 [2024-11-08 17:09:27.961848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.287 [2024-11-08 17:09:27.961887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:51.287 [2024-11-08 17:09:27.961905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.287 [2024-11-08 17:09:27.965278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.287 [2024-11-08 17:09:27.965362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:51.287 BaseBdev4 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 [2024-11-08 17:09:27.969917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:51.287 [2024-11-08 17:09:27.972842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:51.287 [2024-11-08 17:09:27.972999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:51.287 [2024-11-08 17:09:27.973124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:51.287 [2024-11-08 17:09:27.973501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:21:51.287 [2024-11-08 17:09:27.973531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:51.287 [2024-11-08 17:09:27.974002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:21:51.287 [2024-11-08 17:09:27.974259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:21:51.287 [2024-11-08 17:09:27.974281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:21:51.287 [2024-11-08 17:09:27.974637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:51.287 17:09:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.287 17:09:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.547 17:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.547 "name": "raid_bdev1", 00:21:51.547 "uuid": "a7060496-81c4-49a1-82ed-d34bfc0c28e1", 00:21:51.547 "strip_size_kb": 64, 00:21:51.547 "state": "online", 00:21:51.547 "raid_level": "concat", 00:21:51.547 "superblock": true, 00:21:51.547 "num_base_bdevs": 4, 00:21:51.547 "num_base_bdevs_discovered": 4, 00:21:51.547 "num_base_bdevs_operational": 4, 00:21:51.547 "base_bdevs_list": [ 
00:21:51.547 { 00:21:51.547 "name": "BaseBdev1", 00:21:51.547 "uuid": "ed776597-3360-5aa7-b2ad-b7bfd0a47e33", 00:21:51.547 "is_configured": true, 00:21:51.547 "data_offset": 2048, 00:21:51.547 "data_size": 63488 00:21:51.547 }, 00:21:51.547 { 00:21:51.547 "name": "BaseBdev2", 00:21:51.547 "uuid": "72cc5442-3195-539a-ae1a-b58ae59589ce", 00:21:51.547 "is_configured": true, 00:21:51.547 "data_offset": 2048, 00:21:51.547 "data_size": 63488 00:21:51.547 }, 00:21:51.547 { 00:21:51.547 "name": "BaseBdev3", 00:21:51.547 "uuid": "e136bfde-05b2-53c1-b3a0-57481b655129", 00:21:51.547 "is_configured": true, 00:21:51.547 "data_offset": 2048, 00:21:51.547 "data_size": 63488 00:21:51.547 }, 00:21:51.547 { 00:21:51.547 "name": "BaseBdev4", 00:21:51.547 "uuid": "1c17a483-766f-5c56-82d9-3cccbcd96eb0", 00:21:51.547 "is_configured": true, 00:21:51.547 "data_offset": 2048, 00:21:51.547 "data_size": 63488 00:21:51.547 } 00:21:51.548 ] 00:21:51.548 }' 00:21:51.548 17:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.548 17:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.806 17:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:51.806 17:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:51.806 [2024-11-08 17:09:28.403670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.739 17:09:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.739 17:09:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.739 "name": "raid_bdev1", 00:21:52.739 "uuid": "a7060496-81c4-49a1-82ed-d34bfc0c28e1", 00:21:52.739 "strip_size_kb": 64, 00:21:52.739 "state": "online", 00:21:52.739 "raid_level": "concat", 00:21:52.739 "superblock": true, 00:21:52.739 "num_base_bdevs": 4, 00:21:52.739 "num_base_bdevs_discovered": 4, 00:21:52.739 "num_base_bdevs_operational": 4, 00:21:52.739 "base_bdevs_list": [ 00:21:52.739 { 00:21:52.739 "name": "BaseBdev1", 00:21:52.739 "uuid": "ed776597-3360-5aa7-b2ad-b7bfd0a47e33", 00:21:52.739 "is_configured": true, 00:21:52.739 "data_offset": 2048, 00:21:52.739 "data_size": 63488 00:21:52.739 }, 00:21:52.739 { 00:21:52.739 "name": "BaseBdev2", 00:21:52.739 "uuid": "72cc5442-3195-539a-ae1a-b58ae59589ce", 00:21:52.739 "is_configured": true, 00:21:52.739 "data_offset": 2048, 00:21:52.739 "data_size": 63488 00:21:52.739 }, 00:21:52.739 { 00:21:52.739 "name": "BaseBdev3", 00:21:52.739 "uuid": "e136bfde-05b2-53c1-b3a0-57481b655129", 00:21:52.739 "is_configured": true, 00:21:52.739 "data_offset": 2048, 00:21:52.739 "data_size": 63488 00:21:52.739 }, 00:21:52.739 { 00:21:52.739 "name": "BaseBdev4", 00:21:52.739 "uuid": "1c17a483-766f-5c56-82d9-3cccbcd96eb0", 00:21:52.739 "is_configured": true, 00:21:52.739 "data_offset": 2048, 00:21:52.739 "data_size": 63488 00:21:52.739 } 00:21:52.739 ] 00:21:52.739 }' 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.739 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.998 [2024-11-08 17:09:29.637736] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:52.998 [2024-11-08 17:09:29.637786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.998 [2024-11-08 17:09:29.640861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.998 [2024-11-08 17:09:29.640932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.998 [2024-11-08 17:09:29.640980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.998 [2024-11-08 17:09:29.640993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:21:52.998 { 00:21:52.998 "results": [ 00:21:52.998 { 00:21:52.998 "job": "raid_bdev1", 00:21:52.998 "core_mask": "0x1", 00:21:52.998 "workload": "randrw", 00:21:52.998 "percentage": 50, 00:21:52.998 "status": "finished", 00:21:52.998 "queue_depth": 1, 00:21:52.998 "io_size": 131072, 00:21:52.998 "runtime": 1.23198, 00:21:52.998 "iops": 13695.027516680466, 00:21:52.998 "mibps": 1711.8784395850582, 00:21:52.998 "io_failed": 1, 00:21:52.998 "io_timeout": 0, 00:21:52.998 "avg_latency_us": 100.49780103852764, 00:21:52.998 "min_latency_us": 33.47692307692308, 00:21:52.998 "max_latency_us": 1688.8123076923077 00:21:52.998 } 00:21:52.998 ], 00:21:52.998 "core_count": 1 00:21:52.998 } 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71381 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71381 ']' 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71381 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71381 00:21:52.998 killing process with pid 71381 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71381' 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71381 00:21:52.998 [2024-11-08 17:09:29.666844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:52.998 17:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71381 00:21:53.286 [2024-11-08 17:09:29.877035] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.228 17:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kK0OzPomSx 00:21:54.228 17:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:54.228 17:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:54.228 17:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:21:54.228 17:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:54.228 17:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:54.228 17:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:54.228 17:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:21:54.228 00:21:54.228 real 0m3.915s 00:21:54.228 user 0m4.539s 00:21:54.228 sys 0m0.485s 00:21:54.228 17:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:21:54.228 17:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.228 ************************************ 00:21:54.228 END TEST raid_read_error_test 00:21:54.228 ************************************ 00:21:54.228 17:09:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:21:54.228 17:09:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:54.228 17:09:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:54.228 17:09:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:54.228 ************************************ 00:21:54.228 START TEST raid_write_error_test 00:21:54.228 ************************************ 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5rcbQNltsD 00:21:54.228 17:09:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71521 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71521 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71521 ']' 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:54.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.228 17:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:54.228 [2024-11-08 17:09:30.867649] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:21:54.228 [2024-11-08 17:09:30.867806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71521 ] 00:21:54.486 [2024-11-08 17:09:31.031820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.486 [2024-11-08 17:09:31.152911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.745 [2024-11-08 17:09:31.302082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:54.745 [2024-11-08 17:09:31.302139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.043 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:55.043 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:21:55.043 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:55.043 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:55.043 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.043 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.043 BaseBdev1_malloc 00:21:55.043 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.043 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:55.043 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.043 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 true 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 [2024-11-08 17:09:31.764802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:55.302 [2024-11-08 17:09:31.764863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.302 [2024-11-08 17:09:31.764885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:55.302 [2024-11-08 17:09:31.764897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.302 [2024-11-08 17:09:31.767230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.302 [2024-11-08 17:09:31.767271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:55.302 BaseBdev1 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 BaseBdev2_malloc 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:55.302 17:09:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 true 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 [2024-11-08 17:09:31.819695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:55.302 [2024-11-08 17:09:31.819769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.302 [2024-11-08 17:09:31.819788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:55.302 [2024-11-08 17:09:31.819800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.302 [2024-11-08 17:09:31.822165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.302 [2024-11-08 17:09:31.822206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:55.302 BaseBdev2 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:21:55.302 BaseBdev3_malloc 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 true 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 [2024-11-08 17:09:31.885734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:55.302 [2024-11-08 17:09:31.885811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.302 [2024-11-08 17:09:31.885833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:55.302 [2024-11-08 17:09:31.885844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.302 [2024-11-08 17:09:31.888199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.302 [2024-11-08 17:09:31.888236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:55.302 BaseBdev3 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 BaseBdev4_malloc 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 true 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 [2024-11-08 17:09:31.936781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:55.302 [2024-11-08 17:09:31.936951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.302 [2024-11-08 17:09:31.936978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:55.302 [2024-11-08 17:09:31.936990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.302 [2024-11-08 17:09:31.939290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.302 [2024-11-08 17:09:31.939330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:55.302 BaseBdev4 
00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.302 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.302 [2024-11-08 17:09:31.944858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:55.303 [2024-11-08 17:09:31.946927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:55.303 [2024-11-08 17:09:31.947007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:55.303 [2024-11-08 17:09:31.947078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:55.303 [2024-11-08 17:09:31.947312] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:21:55.303 [2024-11-08 17:09:31.947325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:55.303 [2024-11-08 17:09:31.947595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:21:55.303 [2024-11-08 17:09:31.947749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:21:55.303 [2024-11-08 17:09:31.947774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:21:55.303 [2024-11-08 17:09:31.947932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.303 "name": "raid_bdev1", 00:21:55.303 "uuid": "e1c712f7-fd36-4ab5-9cc8-abbe76572054", 00:21:55.303 "strip_size_kb": 64, 00:21:55.303 "state": "online", 00:21:55.303 "raid_level": "concat", 00:21:55.303 "superblock": true, 00:21:55.303 "num_base_bdevs": 4, 00:21:55.303 "num_base_bdevs_discovered": 4, 00:21:55.303 
"num_base_bdevs_operational": 4, 00:21:55.303 "base_bdevs_list": [ 00:21:55.303 { 00:21:55.303 "name": "BaseBdev1", 00:21:55.303 "uuid": "35a7ad8e-e5f5-5a94-84c0-0c10d52329cf", 00:21:55.303 "is_configured": true, 00:21:55.303 "data_offset": 2048, 00:21:55.303 "data_size": 63488 00:21:55.303 }, 00:21:55.303 { 00:21:55.303 "name": "BaseBdev2", 00:21:55.303 "uuid": "435c29be-50b8-5193-8f9d-8c1cd0d1900e", 00:21:55.303 "is_configured": true, 00:21:55.303 "data_offset": 2048, 00:21:55.303 "data_size": 63488 00:21:55.303 }, 00:21:55.303 { 00:21:55.303 "name": "BaseBdev3", 00:21:55.303 "uuid": "e24caa79-0f15-5973-bc94-1c8303d5d04f", 00:21:55.303 "is_configured": true, 00:21:55.303 "data_offset": 2048, 00:21:55.303 "data_size": 63488 00:21:55.303 }, 00:21:55.303 { 00:21:55.303 "name": "BaseBdev4", 00:21:55.303 "uuid": "155ce1e4-0a99-54d4-b8aa-5c4fd5b16307", 00:21:55.303 "is_configured": true, 00:21:55.303 "data_offset": 2048, 00:21:55.303 "data_size": 63488 00:21:55.303 } 00:21:55.303 ] 00:21:55.303 }' 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.303 17:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.561 17:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:55.561 17:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:55.819 [2024-11-08 17:09:32.362169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.771 17:09:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.771 "name": "raid_bdev1", 00:21:56.771 "uuid": "e1c712f7-fd36-4ab5-9cc8-abbe76572054", 00:21:56.771 "strip_size_kb": 64, 00:21:56.771 "state": "online", 00:21:56.771 "raid_level": "concat", 00:21:56.771 "superblock": true, 00:21:56.771 "num_base_bdevs": 4, 00:21:56.771 "num_base_bdevs_discovered": 4, 00:21:56.771 "num_base_bdevs_operational": 4, 00:21:56.771 "base_bdevs_list": [ 00:21:56.771 { 00:21:56.771 "name": "BaseBdev1", 00:21:56.771 "uuid": "35a7ad8e-e5f5-5a94-84c0-0c10d52329cf", 00:21:56.771 "is_configured": true, 00:21:56.771 "data_offset": 2048, 00:21:56.771 "data_size": 63488 00:21:56.771 }, 00:21:56.771 { 00:21:56.771 "name": "BaseBdev2", 00:21:56.771 "uuid": "435c29be-50b8-5193-8f9d-8c1cd0d1900e", 00:21:56.771 "is_configured": true, 00:21:56.771 "data_offset": 2048, 00:21:56.771 "data_size": 63488 00:21:56.771 }, 00:21:56.771 { 00:21:56.771 "name": "BaseBdev3", 00:21:56.771 "uuid": "e24caa79-0f15-5973-bc94-1c8303d5d04f", 00:21:56.771 "is_configured": true, 00:21:56.771 "data_offset": 2048, 00:21:56.771 "data_size": 63488 00:21:56.771 }, 00:21:56.771 { 00:21:56.771 "name": "BaseBdev4", 00:21:56.771 "uuid": "155ce1e4-0a99-54d4-b8aa-5c4fd5b16307", 00:21:56.771 "is_configured": true, 00:21:56.771 "data_offset": 2048, 00:21:56.771 "data_size": 63488 00:21:56.771 } 00:21:56.771 ] 00:21:56.771 }' 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.771 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:57.029 [2024-11-08 17:09:33.608791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.029 [2024-11-08 17:09:33.608826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.029 [2024-11-08 17:09:33.611927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.029 [2024-11-08 17:09:33.611996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.029 [2024-11-08 17:09:33.612047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.029 [2024-11-08 17:09:33.612061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:21:57.029 { 00:21:57.029 "results": [ 00:21:57.029 { 00:21:57.029 "job": "raid_bdev1", 00:21:57.029 "core_mask": "0x1", 00:21:57.029 "workload": "randrw", 00:21:57.029 "percentage": 50, 00:21:57.029 "status": "finished", 00:21:57.029 "queue_depth": 1, 00:21:57.029 "io_size": 131072, 00:21:57.029 "runtime": 1.244446, 00:21:57.029 "iops": 13720.161421226794, 00:21:57.029 "mibps": 1715.0201776533493, 00:21:57.029 "io_failed": 1, 00:21:57.029 "io_timeout": 0, 00:21:57.029 "avg_latency_us": 100.43918819686901, 00:21:57.029 "min_latency_us": 33.47692307692308, 00:21:57.029 "max_latency_us": 1726.6215384615384 00:21:57.029 } 00:21:57.029 ], 00:21:57.029 "core_count": 1 00:21:57.029 } 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71521 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71521 ']' 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71521 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71521 00:21:57.029 killing process with pid 71521 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71521' 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71521 00:21:57.029 17:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71521 00:21:57.029 [2024-11-08 17:09:33.645711] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:57.289 [2024-11-08 17:09:33.864536] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:58.225 17:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5rcbQNltsD 00:21:58.225 17:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:58.225 17:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:58.225 17:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:21:58.225 17:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:58.225 17:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:58.225 17:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:58.225 17:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:21:58.225 00:21:58.225 real 0m3.898s 00:21:58.225 user 0m4.540s 
00:21:58.225 sys 0m0.474s 00:21:58.225 ************************************ 00:21:58.225 END TEST raid_write_error_test 00:21:58.225 ************************************ 00:21:58.225 17:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:58.225 17:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.225 17:09:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:58.225 17:09:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:21:58.225 17:09:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:58.225 17:09:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:58.225 17:09:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:58.225 ************************************ 00:21:58.225 START TEST raid_state_function_test 00:21:58.225 ************************************ 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:58.225 
17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:58.225 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:58.226 17:09:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:58.226 Process raid pid: 71659 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71659 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71659' 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71659 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71659 ']' 00:21:58.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:58.226 17:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.226 [2024-11-08 17:09:34.819100] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:21:58.226 [2024-11-08 17:09:34.819246] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.486 [2024-11-08 17:09:34.982150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.487 [2024-11-08 17:09:35.126688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.747 [2024-11-08 17:09:35.278332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:58.747 [2024-11-08 17:09:35.278384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.008 [2024-11-08 17:09:35.703044] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:59.008 [2024-11-08 17:09:35.703102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:59.008 [2024-11-08 17:09:35.703112] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:59.008 [2024-11-08 17:09:35.703122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:59.008 [2024-11-08 17:09:35.703128] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:59.008 [2024-11-08 17:09:35.703137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:59.008 [2024-11-08 17:09:35.703143] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:59.008 [2024-11-08 17:09:35.703152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.008 17:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.270 17:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.270 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.270 "name": "Existed_Raid", 00:21:59.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.270 "strip_size_kb": 0, 00:21:59.270 "state": "configuring", 00:21:59.270 "raid_level": "raid1", 00:21:59.270 "superblock": false, 00:21:59.270 "num_base_bdevs": 4, 00:21:59.270 "num_base_bdevs_discovered": 0, 00:21:59.270 "num_base_bdevs_operational": 4, 00:21:59.270 "base_bdevs_list": [ 00:21:59.270 { 00:21:59.270 "name": "BaseBdev1", 00:21:59.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.270 "is_configured": false, 00:21:59.270 "data_offset": 0, 00:21:59.270 "data_size": 0 00:21:59.270 }, 00:21:59.270 { 00:21:59.270 "name": "BaseBdev2", 00:21:59.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.270 "is_configured": false, 00:21:59.270 "data_offset": 0, 00:21:59.270 "data_size": 0 00:21:59.270 }, 00:21:59.270 { 00:21:59.270 "name": "BaseBdev3", 00:21:59.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.270 "is_configured": false, 00:21:59.270 "data_offset": 0, 00:21:59.270 "data_size": 0 00:21:59.270 }, 00:21:59.270 { 00:21:59.270 "name": "BaseBdev4", 00:21:59.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.270 "is_configured": false, 00:21:59.270 "data_offset": 0, 00:21:59.270 "data_size": 0 00:21:59.270 } 00:21:59.270 ] 00:21:59.270 }' 00:21:59.270 17:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.270 17:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.532 [2024-11-08 17:09:36.031086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:59.532 [2024-11-08 17:09:36.031134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.532 [2024-11-08 17:09:36.043084] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:59.532 [2024-11-08 17:09:36.043129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:59.532 [2024-11-08 17:09:36.043139] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:59.532 [2024-11-08 17:09:36.043149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:59.532 [2024-11-08 17:09:36.043156] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:59.532 [2024-11-08 17:09:36.043166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:59.532 [2024-11-08 17:09:36.043173] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:59.532 [2024-11-08 17:09:36.043182] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.532 [2024-11-08 17:09:36.078059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:59.532 BaseBdev1 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:59.532 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.533 [ 00:21:59.533 { 00:21:59.533 "name": "BaseBdev1", 00:21:59.533 "aliases": [ 00:21:59.533 "14171d51-319d-4516-9733-bea2688f4d51" 00:21:59.533 ], 00:21:59.533 "product_name": "Malloc disk", 00:21:59.533 "block_size": 512, 00:21:59.533 "num_blocks": 65536, 00:21:59.533 "uuid": "14171d51-319d-4516-9733-bea2688f4d51", 00:21:59.533 "assigned_rate_limits": { 00:21:59.533 "rw_ios_per_sec": 0, 00:21:59.533 "rw_mbytes_per_sec": 0, 00:21:59.533 "r_mbytes_per_sec": 0, 00:21:59.533 "w_mbytes_per_sec": 0 00:21:59.533 }, 00:21:59.533 "claimed": true, 00:21:59.533 "claim_type": "exclusive_write", 00:21:59.533 "zoned": false, 00:21:59.533 "supported_io_types": { 00:21:59.533 "read": true, 00:21:59.533 "write": true, 00:21:59.533 "unmap": true, 00:21:59.533 "flush": true, 00:21:59.533 "reset": true, 00:21:59.533 "nvme_admin": false, 00:21:59.533 "nvme_io": false, 00:21:59.533 "nvme_io_md": false, 00:21:59.533 "write_zeroes": true, 00:21:59.533 "zcopy": true, 00:21:59.533 "get_zone_info": false, 00:21:59.533 "zone_management": false, 00:21:59.533 "zone_append": false, 00:21:59.533 "compare": false, 00:21:59.533 "compare_and_write": false, 00:21:59.533 "abort": true, 00:21:59.533 "seek_hole": false, 00:21:59.533 "seek_data": false, 00:21:59.533 "copy": true, 00:21:59.533 "nvme_iov_md": false 00:21:59.533 }, 00:21:59.533 "memory_domains": [ 00:21:59.533 { 00:21:59.533 "dma_device_id": "system", 00:21:59.533 "dma_device_type": 1 00:21:59.533 }, 00:21:59.533 { 00:21:59.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.533 "dma_device_type": 2 00:21:59.533 } 00:21:59.533 ], 00:21:59.533 "driver_specific": {} 00:21:59.533 } 00:21:59.533 ] 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.533 "name": "Existed_Raid", 
00:21:59.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.533 "strip_size_kb": 0, 00:21:59.533 "state": "configuring", 00:21:59.533 "raid_level": "raid1", 00:21:59.533 "superblock": false, 00:21:59.533 "num_base_bdevs": 4, 00:21:59.533 "num_base_bdevs_discovered": 1, 00:21:59.533 "num_base_bdevs_operational": 4, 00:21:59.533 "base_bdevs_list": [ 00:21:59.533 { 00:21:59.533 "name": "BaseBdev1", 00:21:59.533 "uuid": "14171d51-319d-4516-9733-bea2688f4d51", 00:21:59.533 "is_configured": true, 00:21:59.533 "data_offset": 0, 00:21:59.533 "data_size": 65536 00:21:59.533 }, 00:21:59.533 { 00:21:59.533 "name": "BaseBdev2", 00:21:59.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.533 "is_configured": false, 00:21:59.533 "data_offset": 0, 00:21:59.533 "data_size": 0 00:21:59.533 }, 00:21:59.533 { 00:21:59.533 "name": "BaseBdev3", 00:21:59.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.533 "is_configured": false, 00:21:59.533 "data_offset": 0, 00:21:59.533 "data_size": 0 00:21:59.533 }, 00:21:59.533 { 00:21:59.533 "name": "BaseBdev4", 00:21:59.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.533 "is_configured": false, 00:21:59.533 "data_offset": 0, 00:21:59.533 "data_size": 0 00:21:59.533 } 00:21:59.533 ] 00:21:59.533 }' 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.533 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.794 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:59.794 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.794 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.794 [2024-11-08 17:09:36.446741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:59.794 [2024-11-08 17:09:36.446812] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:59.794 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.794 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:59.794 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.794 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.794 [2024-11-08 17:09:36.454847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:59.794 [2024-11-08 17:09:36.456877] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:59.795 [2024-11-08 17:09:36.456923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:59.795 [2024-11-08 17:09:36.456933] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:59.795 [2024-11-08 17:09:36.456944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:59.795 [2024-11-08 17:09:36.456951] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:59.795 [2024-11-08 17:09:36.456960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:59.795 
17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.795 "name": "Existed_Raid", 00:21:59.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.795 "strip_size_kb": 0, 00:21:59.795 "state": "configuring", 00:21:59.795 "raid_level": "raid1", 00:21:59.795 "superblock": false, 00:21:59.795 "num_base_bdevs": 4, 00:21:59.795 "num_base_bdevs_discovered": 1, 
00:21:59.795 "num_base_bdevs_operational": 4, 00:21:59.795 "base_bdevs_list": [ 00:21:59.795 { 00:21:59.795 "name": "BaseBdev1", 00:21:59.795 "uuid": "14171d51-319d-4516-9733-bea2688f4d51", 00:21:59.795 "is_configured": true, 00:21:59.795 "data_offset": 0, 00:21:59.795 "data_size": 65536 00:21:59.795 }, 00:21:59.795 { 00:21:59.795 "name": "BaseBdev2", 00:21:59.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.795 "is_configured": false, 00:21:59.795 "data_offset": 0, 00:21:59.795 "data_size": 0 00:21:59.795 }, 00:21:59.795 { 00:21:59.795 "name": "BaseBdev3", 00:21:59.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.795 "is_configured": false, 00:21:59.795 "data_offset": 0, 00:21:59.795 "data_size": 0 00:21:59.795 }, 00:21:59.795 { 00:21:59.795 "name": "BaseBdev4", 00:21:59.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.795 "is_configured": false, 00:21:59.795 "data_offset": 0, 00:21:59.795 "data_size": 0 00:21:59.795 } 00:21:59.795 ] 00:21:59.795 }' 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.795 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.366 [2024-11-08 17:09:36.800377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.366 BaseBdev2 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.366 [ 00:22:00.366 { 00:22:00.366 "name": "BaseBdev2", 00:22:00.366 "aliases": [ 00:22:00.366 "908b2688-3810-44f1-bf37-a95d46ee09b2" 00:22:00.366 ], 00:22:00.366 "product_name": "Malloc disk", 00:22:00.366 "block_size": 512, 00:22:00.366 "num_blocks": 65536, 00:22:00.366 "uuid": "908b2688-3810-44f1-bf37-a95d46ee09b2", 00:22:00.366 "assigned_rate_limits": { 00:22:00.366 "rw_ios_per_sec": 0, 00:22:00.366 "rw_mbytes_per_sec": 0, 00:22:00.366 "r_mbytes_per_sec": 0, 00:22:00.366 "w_mbytes_per_sec": 0 00:22:00.366 }, 00:22:00.366 "claimed": true, 00:22:00.366 "claim_type": "exclusive_write", 00:22:00.366 "zoned": false, 00:22:00.366 "supported_io_types": { 00:22:00.366 "read": true, 
00:22:00.366 "write": true, 00:22:00.366 "unmap": true, 00:22:00.366 "flush": true, 00:22:00.366 "reset": true, 00:22:00.366 "nvme_admin": false, 00:22:00.366 "nvme_io": false, 00:22:00.366 "nvme_io_md": false, 00:22:00.366 "write_zeroes": true, 00:22:00.366 "zcopy": true, 00:22:00.366 "get_zone_info": false, 00:22:00.366 "zone_management": false, 00:22:00.366 "zone_append": false, 00:22:00.366 "compare": false, 00:22:00.366 "compare_and_write": false, 00:22:00.366 "abort": true, 00:22:00.366 "seek_hole": false, 00:22:00.366 "seek_data": false, 00:22:00.366 "copy": true, 00:22:00.366 "nvme_iov_md": false 00:22:00.366 }, 00:22:00.366 "memory_domains": [ 00:22:00.366 { 00:22:00.366 "dma_device_id": "system", 00:22:00.366 "dma_device_type": 1 00:22:00.366 }, 00:22:00.366 { 00:22:00.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.366 "dma_device_type": 2 00:22:00.366 } 00:22:00.366 ], 00:22:00.366 "driver_specific": {} 00:22:00.366 } 00:22:00.366 ] 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.366 "name": "Existed_Raid", 00:22:00.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.366 "strip_size_kb": 0, 00:22:00.366 "state": "configuring", 00:22:00.366 "raid_level": "raid1", 00:22:00.366 "superblock": false, 00:22:00.366 "num_base_bdevs": 4, 00:22:00.366 "num_base_bdevs_discovered": 2, 00:22:00.366 "num_base_bdevs_operational": 4, 00:22:00.366 "base_bdevs_list": [ 00:22:00.366 { 00:22:00.366 "name": "BaseBdev1", 00:22:00.366 "uuid": "14171d51-319d-4516-9733-bea2688f4d51", 00:22:00.366 "is_configured": true, 00:22:00.366 "data_offset": 0, 00:22:00.366 "data_size": 65536 00:22:00.366 }, 00:22:00.366 { 00:22:00.366 "name": "BaseBdev2", 00:22:00.366 "uuid": "908b2688-3810-44f1-bf37-a95d46ee09b2", 00:22:00.366 "is_configured": true, 
00:22:00.366 "data_offset": 0, 00:22:00.366 "data_size": 65536 00:22:00.366 }, 00:22:00.366 { 00:22:00.366 "name": "BaseBdev3", 00:22:00.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.366 "is_configured": false, 00:22:00.366 "data_offset": 0, 00:22:00.366 "data_size": 0 00:22:00.366 }, 00:22:00.366 { 00:22:00.366 "name": "BaseBdev4", 00:22:00.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.366 "is_configured": false, 00:22:00.366 "data_offset": 0, 00:22:00.366 "data_size": 0 00:22:00.366 } 00:22:00.366 ] 00:22:00.366 }' 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.366 17:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.628 [2024-11-08 17:09:37.205434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:00.628 BaseBdev3 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.628 [ 00:22:00.628 { 00:22:00.628 "name": "BaseBdev3", 00:22:00.628 "aliases": [ 00:22:00.628 "a534ab19-0ce4-4612-9215-84f5073cf52c" 00:22:00.628 ], 00:22:00.628 "product_name": "Malloc disk", 00:22:00.628 "block_size": 512, 00:22:00.628 "num_blocks": 65536, 00:22:00.628 "uuid": "a534ab19-0ce4-4612-9215-84f5073cf52c", 00:22:00.628 "assigned_rate_limits": { 00:22:00.628 "rw_ios_per_sec": 0, 00:22:00.628 "rw_mbytes_per_sec": 0, 00:22:00.628 "r_mbytes_per_sec": 0, 00:22:00.628 "w_mbytes_per_sec": 0 00:22:00.628 }, 00:22:00.628 "claimed": true, 00:22:00.628 "claim_type": "exclusive_write", 00:22:00.628 "zoned": false, 00:22:00.628 "supported_io_types": { 00:22:00.628 "read": true, 00:22:00.628 "write": true, 00:22:00.628 "unmap": true, 00:22:00.628 "flush": true, 00:22:00.628 "reset": true, 00:22:00.628 "nvme_admin": false, 00:22:00.628 "nvme_io": false, 00:22:00.628 "nvme_io_md": false, 00:22:00.628 "write_zeroes": true, 00:22:00.628 "zcopy": true, 00:22:00.628 "get_zone_info": false, 00:22:00.628 "zone_management": false, 00:22:00.628 "zone_append": false, 00:22:00.628 "compare": false, 00:22:00.628 "compare_and_write": false, 
00:22:00.628 "abort": true, 00:22:00.628 "seek_hole": false, 00:22:00.628 "seek_data": false, 00:22:00.628 "copy": true, 00:22:00.628 "nvme_iov_md": false 00:22:00.628 }, 00:22:00.628 "memory_domains": [ 00:22:00.628 { 00:22:00.628 "dma_device_id": "system", 00:22:00.628 "dma_device_type": 1 00:22:00.628 }, 00:22:00.628 { 00:22:00.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.628 "dma_device_type": 2 00:22:00.628 } 00:22:00.628 ], 00:22:00.628 "driver_specific": {} 00:22:00.628 } 00:22:00.628 ] 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.628 "name": "Existed_Raid", 00:22:00.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.628 "strip_size_kb": 0, 00:22:00.628 "state": "configuring", 00:22:00.628 "raid_level": "raid1", 00:22:00.628 "superblock": false, 00:22:00.628 "num_base_bdevs": 4, 00:22:00.628 "num_base_bdevs_discovered": 3, 00:22:00.628 "num_base_bdevs_operational": 4, 00:22:00.628 "base_bdevs_list": [ 00:22:00.628 { 00:22:00.628 "name": "BaseBdev1", 00:22:00.628 "uuid": "14171d51-319d-4516-9733-bea2688f4d51", 00:22:00.628 "is_configured": true, 00:22:00.628 "data_offset": 0, 00:22:00.628 "data_size": 65536 00:22:00.628 }, 00:22:00.628 { 00:22:00.628 "name": "BaseBdev2", 00:22:00.628 "uuid": "908b2688-3810-44f1-bf37-a95d46ee09b2", 00:22:00.628 "is_configured": true, 00:22:00.628 "data_offset": 0, 00:22:00.628 "data_size": 65536 00:22:00.628 }, 00:22:00.628 { 00:22:00.628 "name": "BaseBdev3", 00:22:00.628 "uuid": "a534ab19-0ce4-4612-9215-84f5073cf52c", 00:22:00.628 "is_configured": true, 00:22:00.628 "data_offset": 0, 00:22:00.628 "data_size": 65536 00:22:00.628 }, 00:22:00.628 { 00:22:00.628 "name": "BaseBdev4", 00:22:00.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.628 "is_configured": false, 
00:22:00.628 "data_offset": 0, 00:22:00.628 "data_size": 0 00:22:00.628 } 00:22:00.628 ] 00:22:00.628 }' 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.628 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 [2024-11-08 17:09:37.566250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:00.887 [2024-11-08 17:09:37.566320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:00.887 [2024-11-08 17:09:37.566328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:00.887 [2024-11-08 17:09:37.566620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:00.887 [2024-11-08 17:09:37.566805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:00.887 [2024-11-08 17:09:37.566817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:00.887 [2024-11-08 17:09:37.567090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.887 BaseBdev4 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.887 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.887 [ 00:22:00.887 { 00:22:00.887 "name": "BaseBdev4", 00:22:00.887 "aliases": [ 00:22:00.887 "1cef445a-e559-4886-9d8b-23c9ab84e08e" 00:22:00.887 ], 00:22:00.887 "product_name": "Malloc disk", 00:22:00.887 "block_size": 512, 00:22:00.887 "num_blocks": 65536, 00:22:00.887 "uuid": "1cef445a-e559-4886-9d8b-23c9ab84e08e", 00:22:00.887 "assigned_rate_limits": { 00:22:00.887 "rw_ios_per_sec": 0, 00:22:00.887 "rw_mbytes_per_sec": 0, 00:22:00.887 "r_mbytes_per_sec": 0, 00:22:00.887 "w_mbytes_per_sec": 0 00:22:00.887 }, 00:22:00.887 "claimed": true, 00:22:00.887 "claim_type": "exclusive_write", 00:22:00.887 "zoned": false, 00:22:00.887 "supported_io_types": { 00:22:00.887 "read": true, 00:22:00.887 "write": true, 00:22:00.887 "unmap": true, 00:22:00.887 "flush": true, 00:22:00.887 "reset": true, 00:22:00.887 
"nvme_admin": false, 00:22:00.887 "nvme_io": false, 00:22:00.887 "nvme_io_md": false, 00:22:00.887 "write_zeroes": true, 00:22:00.887 "zcopy": true, 00:22:00.887 "get_zone_info": false, 00:22:00.887 "zone_management": false, 00:22:00.887 "zone_append": false, 00:22:00.887 "compare": false, 00:22:00.887 "compare_and_write": false, 00:22:00.888 "abort": true, 00:22:00.888 "seek_hole": false, 00:22:00.888 "seek_data": false, 00:22:00.888 "copy": true, 00:22:00.888 "nvme_iov_md": false 00:22:00.888 }, 00:22:00.888 "memory_domains": [ 00:22:00.888 { 00:22:00.888 "dma_device_id": "system", 00:22:00.888 "dma_device_type": 1 00:22:00.888 }, 00:22:00.888 { 00:22:00.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.888 "dma_device_type": 2 00:22:00.888 } 00:22:00.888 ], 00:22:00.888 "driver_specific": {} 00:22:00.888 } 00:22:00.888 ] 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:00.888 17:09:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.888 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.145 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.145 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.145 "name": "Existed_Raid", 00:22:01.145 "uuid": "397b26ad-22b7-4a04-a837-2393a08b57d2", 00:22:01.145 "strip_size_kb": 0, 00:22:01.145 "state": "online", 00:22:01.145 "raid_level": "raid1", 00:22:01.145 "superblock": false, 00:22:01.145 "num_base_bdevs": 4, 00:22:01.145 "num_base_bdevs_discovered": 4, 00:22:01.145 "num_base_bdevs_operational": 4, 00:22:01.145 "base_bdevs_list": [ 00:22:01.145 { 00:22:01.145 "name": "BaseBdev1", 00:22:01.145 "uuid": "14171d51-319d-4516-9733-bea2688f4d51", 00:22:01.145 "is_configured": true, 00:22:01.145 "data_offset": 0, 00:22:01.145 "data_size": 65536 00:22:01.145 }, 00:22:01.145 { 00:22:01.145 "name": "BaseBdev2", 00:22:01.145 "uuid": "908b2688-3810-44f1-bf37-a95d46ee09b2", 00:22:01.145 "is_configured": true, 00:22:01.145 "data_offset": 0, 00:22:01.145 "data_size": 65536 00:22:01.145 }, 00:22:01.145 { 00:22:01.145 "name": "BaseBdev3", 00:22:01.145 "uuid": 
"a534ab19-0ce4-4612-9215-84f5073cf52c", 00:22:01.145 "is_configured": true, 00:22:01.145 "data_offset": 0, 00:22:01.145 "data_size": 65536 00:22:01.145 }, 00:22:01.145 { 00:22:01.145 "name": "BaseBdev4", 00:22:01.145 "uuid": "1cef445a-e559-4886-9d8b-23c9ab84e08e", 00:22:01.145 "is_configured": true, 00:22:01.145 "data_offset": 0, 00:22:01.145 "data_size": 65536 00:22:01.145 } 00:22:01.145 ] 00:22:01.145 }' 00:22:01.145 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.145 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:01.404 [2024-11-08 17:09:37.930787] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.404 17:09:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:01.404 "name": "Existed_Raid", 00:22:01.404 "aliases": [ 00:22:01.404 "397b26ad-22b7-4a04-a837-2393a08b57d2" 00:22:01.404 ], 00:22:01.404 "product_name": "Raid Volume", 00:22:01.404 "block_size": 512, 00:22:01.404 "num_blocks": 65536, 00:22:01.404 "uuid": "397b26ad-22b7-4a04-a837-2393a08b57d2", 00:22:01.404 "assigned_rate_limits": { 00:22:01.404 "rw_ios_per_sec": 0, 00:22:01.404 "rw_mbytes_per_sec": 0, 00:22:01.404 "r_mbytes_per_sec": 0, 00:22:01.404 "w_mbytes_per_sec": 0 00:22:01.404 }, 00:22:01.404 "claimed": false, 00:22:01.404 "zoned": false, 00:22:01.404 "supported_io_types": { 00:22:01.404 "read": true, 00:22:01.404 "write": true, 00:22:01.404 "unmap": false, 00:22:01.404 "flush": false, 00:22:01.404 "reset": true, 00:22:01.404 "nvme_admin": false, 00:22:01.404 "nvme_io": false, 00:22:01.404 "nvme_io_md": false, 00:22:01.404 "write_zeroes": true, 00:22:01.404 "zcopy": false, 00:22:01.404 "get_zone_info": false, 00:22:01.404 "zone_management": false, 00:22:01.404 "zone_append": false, 00:22:01.404 "compare": false, 00:22:01.404 "compare_and_write": false, 00:22:01.404 "abort": false, 00:22:01.404 "seek_hole": false, 00:22:01.404 "seek_data": false, 00:22:01.404 "copy": false, 00:22:01.404 "nvme_iov_md": false 00:22:01.404 }, 00:22:01.404 "memory_domains": [ 00:22:01.404 { 00:22:01.404 "dma_device_id": "system", 00:22:01.404 "dma_device_type": 1 00:22:01.404 }, 00:22:01.404 { 00:22:01.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.404 "dma_device_type": 2 00:22:01.404 }, 00:22:01.404 { 00:22:01.404 "dma_device_id": "system", 00:22:01.404 "dma_device_type": 1 00:22:01.404 }, 00:22:01.404 { 00:22:01.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.404 "dma_device_type": 2 00:22:01.404 }, 00:22:01.404 { 00:22:01.404 "dma_device_id": "system", 00:22:01.404 "dma_device_type": 1 00:22:01.404 }, 00:22:01.404 { 00:22:01.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:22:01.404 "dma_device_type": 2 00:22:01.404 }, 00:22:01.404 { 00:22:01.404 "dma_device_id": "system", 00:22:01.404 "dma_device_type": 1 00:22:01.404 }, 00:22:01.404 { 00:22:01.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.404 "dma_device_type": 2 00:22:01.404 } 00:22:01.404 ], 00:22:01.404 "driver_specific": { 00:22:01.404 "raid": { 00:22:01.404 "uuid": "397b26ad-22b7-4a04-a837-2393a08b57d2", 00:22:01.404 "strip_size_kb": 0, 00:22:01.404 "state": "online", 00:22:01.404 "raid_level": "raid1", 00:22:01.404 "superblock": false, 00:22:01.404 "num_base_bdevs": 4, 00:22:01.404 "num_base_bdevs_discovered": 4, 00:22:01.404 "num_base_bdevs_operational": 4, 00:22:01.404 "base_bdevs_list": [ 00:22:01.404 { 00:22:01.404 "name": "BaseBdev1", 00:22:01.404 "uuid": "14171d51-319d-4516-9733-bea2688f4d51", 00:22:01.404 "is_configured": true, 00:22:01.404 "data_offset": 0, 00:22:01.404 "data_size": 65536 00:22:01.404 }, 00:22:01.404 { 00:22:01.404 "name": "BaseBdev2", 00:22:01.404 "uuid": "908b2688-3810-44f1-bf37-a95d46ee09b2", 00:22:01.404 "is_configured": true, 00:22:01.404 "data_offset": 0, 00:22:01.404 "data_size": 65536 00:22:01.404 }, 00:22:01.404 { 00:22:01.404 "name": "BaseBdev3", 00:22:01.404 "uuid": "a534ab19-0ce4-4612-9215-84f5073cf52c", 00:22:01.404 "is_configured": true, 00:22:01.404 "data_offset": 0, 00:22:01.404 "data_size": 65536 00:22:01.404 }, 00:22:01.404 { 00:22:01.404 "name": "BaseBdev4", 00:22:01.404 "uuid": "1cef445a-e559-4886-9d8b-23c9ab84e08e", 00:22:01.404 "is_configured": true, 00:22:01.404 "data_offset": 0, 00:22:01.404 "data_size": 65536 00:22:01.404 } 00:22:01.404 ] 00:22:01.404 } 00:22:01.404 } 00:22:01.404 }' 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:01.404 BaseBdev2 00:22:01.404 BaseBdev3 
00:22:01.404 BaseBdev4' 00:22:01.404 17:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.404 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:01.404 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.404 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:01.404 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.404 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.404 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.404 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.404 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.405 17:09:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.405 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:01.663 17:09:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.663 [2024-11-08 17:09:38.174539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.663 
17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.663 "name": "Existed_Raid", 00:22:01.663 "uuid": "397b26ad-22b7-4a04-a837-2393a08b57d2", 00:22:01.663 "strip_size_kb": 0, 00:22:01.663 "state": "online", 00:22:01.663 "raid_level": "raid1", 00:22:01.663 "superblock": false, 00:22:01.663 "num_base_bdevs": 4, 00:22:01.663 "num_base_bdevs_discovered": 3, 00:22:01.663 "num_base_bdevs_operational": 3, 00:22:01.663 "base_bdevs_list": [ 00:22:01.663 { 00:22:01.663 "name": null, 00:22:01.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.663 "is_configured": false, 00:22:01.663 "data_offset": 0, 00:22:01.663 "data_size": 65536 00:22:01.663 }, 00:22:01.663 { 00:22:01.663 "name": "BaseBdev2", 00:22:01.663 "uuid": "908b2688-3810-44f1-bf37-a95d46ee09b2", 00:22:01.663 "is_configured": true, 00:22:01.663 "data_offset": 0, 00:22:01.663 "data_size": 65536 00:22:01.663 }, 00:22:01.663 { 00:22:01.663 "name": "BaseBdev3", 00:22:01.663 "uuid": "a534ab19-0ce4-4612-9215-84f5073cf52c", 00:22:01.663 "is_configured": true, 00:22:01.663 "data_offset": 0, 
00:22:01.663 "data_size": 65536 00:22:01.663 }, 00:22:01.663 { 00:22:01.663 "name": "BaseBdev4", 00:22:01.663 "uuid": "1cef445a-e559-4886-9d8b-23c9ab84e08e", 00:22:01.663 "is_configured": true, 00:22:01.663 "data_offset": 0, 00:22:01.663 "data_size": 65536 00:22:01.663 } 00:22:01.663 ] 00:22:01.663 }' 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.663 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.921 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.921 [2024-11-08 17:09:38.600897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.180 [2024-11-08 17:09:38.703451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.180 [2024-11-08 17:09:38.810828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:02.180 [2024-11-08 17:09:38.810951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.180 [2024-11-08 17:09:38.874386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.180 [2024-11-08 17:09:38.874456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.180 [2024-11-08 17:09:38.874468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:02.180 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:22:02.181 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.181 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:02.181 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.181 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.440 BaseBdev2 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 
-- # [[ -z '' ]] 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.440 [ 00:22:02.440 { 00:22:02.440 "name": "BaseBdev2", 00:22:02.440 "aliases": [ 00:22:02.440 "292c1d83-4935-4a03-bd25-b00caa6b3aa5" 00:22:02.440 ], 00:22:02.440 "product_name": "Malloc disk", 00:22:02.440 "block_size": 512, 00:22:02.440 "num_blocks": 65536, 00:22:02.440 "uuid": "292c1d83-4935-4a03-bd25-b00caa6b3aa5", 00:22:02.440 "assigned_rate_limits": { 00:22:02.440 "rw_ios_per_sec": 0, 00:22:02.440 "rw_mbytes_per_sec": 0, 00:22:02.440 "r_mbytes_per_sec": 0, 00:22:02.440 "w_mbytes_per_sec": 0 00:22:02.440 }, 00:22:02.440 "claimed": false, 00:22:02.440 "zoned": false, 00:22:02.440 "supported_io_types": { 00:22:02.440 "read": true, 00:22:02.440 "write": true, 00:22:02.440 "unmap": true, 00:22:02.440 "flush": true, 00:22:02.440 "reset": true, 00:22:02.440 "nvme_admin": false, 00:22:02.440 "nvme_io": false, 00:22:02.440 "nvme_io_md": false, 00:22:02.440 "write_zeroes": true, 00:22:02.440 "zcopy": true, 00:22:02.440 "get_zone_info": false, 00:22:02.440 "zone_management": false, 00:22:02.440 "zone_append": false, 00:22:02.440 "compare": false, 
00:22:02.440 "compare_and_write": false, 00:22:02.440 "abort": true, 00:22:02.440 "seek_hole": false, 00:22:02.440 "seek_data": false, 00:22:02.440 "copy": true, 00:22:02.440 "nvme_iov_md": false 00:22:02.440 }, 00:22:02.440 "memory_domains": [ 00:22:02.440 { 00:22:02.440 "dma_device_id": "system", 00:22:02.440 "dma_device_type": 1 00:22:02.440 }, 00:22:02.440 { 00:22:02.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.440 "dma_device_type": 2 00:22:02.440 } 00:22:02.440 ], 00:22:02.440 "driver_specific": {} 00:22:02.440 } 00:22:02.440 ] 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.440 17:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.440 BaseBdev3 00:22:02.440 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.440 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:02.440 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:02.440 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:02.440 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' 
]] 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.441 [ 00:22:02.441 { 00:22:02.441 "name": "BaseBdev3", 00:22:02.441 "aliases": [ 00:22:02.441 "55e85729-10b6-4189-9074-e0e8b85b4c33" 00:22:02.441 ], 00:22:02.441 "product_name": "Malloc disk", 00:22:02.441 "block_size": 512, 00:22:02.441 "num_blocks": 65536, 00:22:02.441 "uuid": "55e85729-10b6-4189-9074-e0e8b85b4c33", 00:22:02.441 "assigned_rate_limits": { 00:22:02.441 "rw_ios_per_sec": 0, 00:22:02.441 "rw_mbytes_per_sec": 0, 00:22:02.441 "r_mbytes_per_sec": 0, 00:22:02.441 "w_mbytes_per_sec": 0 00:22:02.441 }, 00:22:02.441 "claimed": false, 00:22:02.441 "zoned": false, 00:22:02.441 "supported_io_types": { 00:22:02.441 "read": true, 00:22:02.441 "write": true, 00:22:02.441 "unmap": true, 00:22:02.441 "flush": true, 00:22:02.441 "reset": true, 00:22:02.441 "nvme_admin": false, 00:22:02.441 "nvme_io": false, 00:22:02.441 "nvme_io_md": false, 00:22:02.441 "write_zeroes": true, 00:22:02.441 "zcopy": true, 00:22:02.441 "get_zone_info": false, 00:22:02.441 "zone_management": false, 00:22:02.441 "zone_append": false, 00:22:02.441 "compare": false, 00:22:02.441 
"compare_and_write": false, 00:22:02.441 "abort": true, 00:22:02.441 "seek_hole": false, 00:22:02.441 "seek_data": false, 00:22:02.441 "copy": true, 00:22:02.441 "nvme_iov_md": false 00:22:02.441 }, 00:22:02.441 "memory_domains": [ 00:22:02.441 { 00:22:02.441 "dma_device_id": "system", 00:22:02.441 "dma_device_type": 1 00:22:02.441 }, 00:22:02.441 { 00:22:02.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.441 "dma_device_type": 2 00:22:02.441 } 00:22:02.441 ], 00:22:02.441 "driver_specific": {} 00:22:02.441 } 00:22:02.441 ] 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.441 BaseBdev4 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 
00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.441 [ 00:22:02.441 { 00:22:02.441 "name": "BaseBdev4", 00:22:02.441 "aliases": [ 00:22:02.441 "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5" 00:22:02.441 ], 00:22:02.441 "product_name": "Malloc disk", 00:22:02.441 "block_size": 512, 00:22:02.441 "num_blocks": 65536, 00:22:02.441 "uuid": "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5", 00:22:02.441 "assigned_rate_limits": { 00:22:02.441 "rw_ios_per_sec": 0, 00:22:02.441 "rw_mbytes_per_sec": 0, 00:22:02.441 "r_mbytes_per_sec": 0, 00:22:02.441 "w_mbytes_per_sec": 0 00:22:02.441 }, 00:22:02.441 "claimed": false, 00:22:02.441 "zoned": false, 00:22:02.441 "supported_io_types": { 00:22:02.441 "read": true, 00:22:02.441 "write": true, 00:22:02.441 "unmap": true, 00:22:02.441 "flush": true, 00:22:02.441 "reset": true, 00:22:02.441 "nvme_admin": false, 00:22:02.441 "nvme_io": false, 00:22:02.441 "nvme_io_md": false, 00:22:02.441 "write_zeroes": true, 00:22:02.441 "zcopy": true, 00:22:02.441 "get_zone_info": false, 00:22:02.441 "zone_management": false, 00:22:02.441 "zone_append": false, 00:22:02.441 "compare": false, 00:22:02.441 
"compare_and_write": false, 00:22:02.441 "abort": true, 00:22:02.441 "seek_hole": false, 00:22:02.441 "seek_data": false, 00:22:02.441 "copy": true, 00:22:02.441 "nvme_iov_md": false 00:22:02.441 }, 00:22:02.441 "memory_domains": [ 00:22:02.441 { 00:22:02.441 "dma_device_id": "system", 00:22:02.441 "dma_device_type": 1 00:22:02.441 }, 00:22:02.441 { 00:22:02.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.441 "dma_device_type": 2 00:22:02.441 } 00:22:02.441 ], 00:22:02.441 "driver_specific": {} 00:22:02.441 } 00:22:02.441 ] 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.441 [2024-11-08 17:09:39.103444] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:02.441 [2024-11-08 17:09:39.103607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:02.441 [2024-11-08 17:09:39.103688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.441 [2024-11-08 17:09:39.105734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.441 [2024-11-08 17:09:39.105883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.441 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.441 "name": "Existed_Raid", 00:22:02.441 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:02.441 "strip_size_kb": 0, 00:22:02.441 "state": "configuring", 00:22:02.441 "raid_level": "raid1", 00:22:02.441 "superblock": false, 00:22:02.441 "num_base_bdevs": 4, 00:22:02.441 "num_base_bdevs_discovered": 3, 00:22:02.441 "num_base_bdevs_operational": 4, 00:22:02.441 "base_bdevs_list": [ 00:22:02.441 { 00:22:02.441 "name": "BaseBdev1", 00:22:02.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.441 "is_configured": false, 00:22:02.441 "data_offset": 0, 00:22:02.441 "data_size": 0 00:22:02.441 }, 00:22:02.441 { 00:22:02.441 "name": "BaseBdev2", 00:22:02.441 "uuid": "292c1d83-4935-4a03-bd25-b00caa6b3aa5", 00:22:02.441 "is_configured": true, 00:22:02.441 "data_offset": 0, 00:22:02.441 "data_size": 65536 00:22:02.441 }, 00:22:02.442 { 00:22:02.442 "name": "BaseBdev3", 00:22:02.442 "uuid": "55e85729-10b6-4189-9074-e0e8b85b4c33", 00:22:02.442 "is_configured": true, 00:22:02.442 "data_offset": 0, 00:22:02.442 "data_size": 65536 00:22:02.442 }, 00:22:02.442 { 00:22:02.442 "name": "BaseBdev4", 00:22:02.442 "uuid": "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5", 00:22:02.442 "is_configured": true, 00:22:02.442 "data_offset": 0, 00:22:02.442 "data_size": 65536 00:22:02.442 } 00:22:02.442 ] 00:22:02.442 }' 00:22:02.442 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.442 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.007 [2024-11-08 17:09:39.435526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.007 "name": "Existed_Raid", 00:22:03.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.007 
"strip_size_kb": 0, 00:22:03.007 "state": "configuring", 00:22:03.007 "raid_level": "raid1", 00:22:03.007 "superblock": false, 00:22:03.007 "num_base_bdevs": 4, 00:22:03.007 "num_base_bdevs_discovered": 2, 00:22:03.007 "num_base_bdevs_operational": 4, 00:22:03.007 "base_bdevs_list": [ 00:22:03.007 { 00:22:03.007 "name": "BaseBdev1", 00:22:03.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.007 "is_configured": false, 00:22:03.007 "data_offset": 0, 00:22:03.007 "data_size": 0 00:22:03.007 }, 00:22:03.007 { 00:22:03.007 "name": null, 00:22:03.007 "uuid": "292c1d83-4935-4a03-bd25-b00caa6b3aa5", 00:22:03.007 "is_configured": false, 00:22:03.007 "data_offset": 0, 00:22:03.007 "data_size": 65536 00:22:03.007 }, 00:22:03.007 { 00:22:03.007 "name": "BaseBdev3", 00:22:03.007 "uuid": "55e85729-10b6-4189-9074-e0e8b85b4c33", 00:22:03.007 "is_configured": true, 00:22:03.007 "data_offset": 0, 00:22:03.007 "data_size": 65536 00:22:03.007 }, 00:22:03.007 { 00:22:03.007 "name": "BaseBdev4", 00:22:03.007 "uuid": "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5", 00:22:03.007 "is_configured": true, 00:22:03.007 "data_offset": 0, 00:22:03.007 "data_size": 65536 00:22:03.007 } 00:22:03.007 ] 00:22:03.007 }' 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.007 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.265 17:09:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.265 [2024-11-08 17:09:39.816840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.265 BaseBdev1 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.265 [ 00:22:03.265 { 00:22:03.265 "name": "BaseBdev1", 00:22:03.265 "aliases": [ 00:22:03.265 "dec39106-67c2-4b98-a793-9db8493c8d96" 00:22:03.265 ], 00:22:03.265 "product_name": "Malloc disk", 00:22:03.265 "block_size": 512, 00:22:03.265 "num_blocks": 65536, 00:22:03.265 "uuid": "dec39106-67c2-4b98-a793-9db8493c8d96", 00:22:03.265 "assigned_rate_limits": { 00:22:03.265 "rw_ios_per_sec": 0, 00:22:03.265 "rw_mbytes_per_sec": 0, 00:22:03.265 "r_mbytes_per_sec": 0, 00:22:03.265 "w_mbytes_per_sec": 0 00:22:03.265 }, 00:22:03.265 "claimed": true, 00:22:03.265 "claim_type": "exclusive_write", 00:22:03.265 "zoned": false, 00:22:03.265 "supported_io_types": { 00:22:03.265 "read": true, 00:22:03.265 "write": true, 00:22:03.265 "unmap": true, 00:22:03.265 "flush": true, 00:22:03.265 "reset": true, 00:22:03.265 "nvme_admin": false, 00:22:03.265 "nvme_io": false, 00:22:03.265 "nvme_io_md": false, 00:22:03.265 "write_zeroes": true, 00:22:03.265 "zcopy": true, 00:22:03.265 "get_zone_info": false, 00:22:03.265 "zone_management": false, 00:22:03.265 "zone_append": false, 00:22:03.265 "compare": false, 00:22:03.265 "compare_and_write": false, 00:22:03.265 "abort": true, 00:22:03.265 "seek_hole": false, 00:22:03.265 "seek_data": false, 00:22:03.265 "copy": true, 00:22:03.265 "nvme_iov_md": false 00:22:03.265 }, 00:22:03.265 "memory_domains": [ 00:22:03.265 { 00:22:03.265 "dma_device_id": "system", 00:22:03.265 "dma_device_type": 1 00:22:03.265 }, 00:22:03.265 { 00:22:03.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.265 "dma_device_type": 2 00:22:03.265 } 00:22:03.265 ], 00:22:03.265 "driver_specific": {} 00:22:03.265 } 00:22:03.265 ] 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.265 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.265 "name": "Existed_Raid", 00:22:03.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.265 
"strip_size_kb": 0, 00:22:03.265 "state": "configuring", 00:22:03.265 "raid_level": "raid1", 00:22:03.265 "superblock": false, 00:22:03.265 "num_base_bdevs": 4, 00:22:03.265 "num_base_bdevs_discovered": 3, 00:22:03.265 "num_base_bdevs_operational": 4, 00:22:03.265 "base_bdevs_list": [ 00:22:03.265 { 00:22:03.265 "name": "BaseBdev1", 00:22:03.265 "uuid": "dec39106-67c2-4b98-a793-9db8493c8d96", 00:22:03.265 "is_configured": true, 00:22:03.265 "data_offset": 0, 00:22:03.265 "data_size": 65536 00:22:03.266 }, 00:22:03.266 { 00:22:03.266 "name": null, 00:22:03.266 "uuid": "292c1d83-4935-4a03-bd25-b00caa6b3aa5", 00:22:03.266 "is_configured": false, 00:22:03.266 "data_offset": 0, 00:22:03.266 "data_size": 65536 00:22:03.266 }, 00:22:03.266 { 00:22:03.266 "name": "BaseBdev3", 00:22:03.266 "uuid": "55e85729-10b6-4189-9074-e0e8b85b4c33", 00:22:03.266 "is_configured": true, 00:22:03.266 "data_offset": 0, 00:22:03.266 "data_size": 65536 00:22:03.266 }, 00:22:03.266 { 00:22:03.266 "name": "BaseBdev4", 00:22:03.266 "uuid": "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5", 00:22:03.266 "is_configured": true, 00:22:03.266 "data_offset": 0, 00:22:03.266 "data_size": 65536 00:22:03.266 } 00:22:03.266 ] 00:22:03.266 }' 00:22:03.266 17:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.266 17:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.523 
17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.523 [2024-11-08 17:09:40.201062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.523 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.781 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.781 "name": "Existed_Raid", 00:22:03.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.781 "strip_size_kb": 0, 00:22:03.781 "state": "configuring", 00:22:03.781 "raid_level": "raid1", 00:22:03.781 "superblock": false, 00:22:03.781 "num_base_bdevs": 4, 00:22:03.781 "num_base_bdevs_discovered": 2, 00:22:03.781 "num_base_bdevs_operational": 4, 00:22:03.781 "base_bdevs_list": [ 00:22:03.781 { 00:22:03.781 "name": "BaseBdev1", 00:22:03.781 "uuid": "dec39106-67c2-4b98-a793-9db8493c8d96", 00:22:03.781 "is_configured": true, 00:22:03.781 "data_offset": 0, 00:22:03.781 "data_size": 65536 00:22:03.781 }, 00:22:03.781 { 00:22:03.781 "name": null, 00:22:03.781 "uuid": "292c1d83-4935-4a03-bd25-b00caa6b3aa5", 00:22:03.781 "is_configured": false, 00:22:03.781 "data_offset": 0, 00:22:03.781 "data_size": 65536 00:22:03.781 }, 00:22:03.781 { 00:22:03.781 "name": null, 00:22:03.781 "uuid": "55e85729-10b6-4189-9074-e0e8b85b4c33", 00:22:03.781 "is_configured": false, 00:22:03.781 "data_offset": 0, 00:22:03.781 "data_size": 65536 00:22:03.781 }, 00:22:03.781 { 00:22:03.781 "name": "BaseBdev4", 00:22:03.781 "uuid": "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5", 00:22:03.781 "is_configured": true, 00:22:03.781 "data_offset": 0, 00:22:03.781 "data_size": 65536 00:22:03.781 } 00:22:03.781 ] 00:22:03.781 }' 00:22:03.781 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.781 17:09:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:04.039 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.039 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.040 [2024-11-08 17:09:40.561144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.040 "name": "Existed_Raid", 00:22:04.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.040 "strip_size_kb": 0, 00:22:04.040 "state": "configuring", 00:22:04.040 "raid_level": "raid1", 00:22:04.040 "superblock": false, 00:22:04.040 "num_base_bdevs": 4, 00:22:04.040 "num_base_bdevs_discovered": 3, 00:22:04.040 "num_base_bdevs_operational": 4, 00:22:04.040 "base_bdevs_list": [ 00:22:04.040 { 00:22:04.040 "name": "BaseBdev1", 00:22:04.040 "uuid": "dec39106-67c2-4b98-a793-9db8493c8d96", 00:22:04.040 "is_configured": true, 00:22:04.040 "data_offset": 0, 00:22:04.040 "data_size": 65536 00:22:04.040 }, 00:22:04.040 { 00:22:04.040 "name": null, 00:22:04.040 "uuid": "292c1d83-4935-4a03-bd25-b00caa6b3aa5", 00:22:04.040 "is_configured": false, 00:22:04.040 "data_offset": 0, 00:22:04.040 "data_size": 65536 00:22:04.040 }, 00:22:04.040 { 
00:22:04.040 "name": "BaseBdev3", 00:22:04.040 "uuid": "55e85729-10b6-4189-9074-e0e8b85b4c33", 00:22:04.040 "is_configured": true, 00:22:04.040 "data_offset": 0, 00:22:04.040 "data_size": 65536 00:22:04.040 }, 00:22:04.040 { 00:22:04.040 "name": "BaseBdev4", 00:22:04.040 "uuid": "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5", 00:22:04.040 "is_configured": true, 00:22:04.040 "data_offset": 0, 00:22:04.040 "data_size": 65536 00:22:04.040 } 00:22:04.040 ] 00:22:04.040 }' 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.040 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.298 [2024-11-08 17:09:40.929276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.298 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.299 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.299 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.299 17:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.299 17:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.556 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.556 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.556 "name": "Existed_Raid", 00:22:04.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.556 "strip_size_kb": 0, 00:22:04.556 "state": "configuring", 00:22:04.556 "raid_level": "raid1", 00:22:04.556 "superblock": false, 00:22:04.556 
"num_base_bdevs": 4, 00:22:04.556 "num_base_bdevs_discovered": 2, 00:22:04.556 "num_base_bdevs_operational": 4, 00:22:04.556 "base_bdevs_list": [ 00:22:04.556 { 00:22:04.556 "name": null, 00:22:04.556 "uuid": "dec39106-67c2-4b98-a793-9db8493c8d96", 00:22:04.556 "is_configured": false, 00:22:04.556 "data_offset": 0, 00:22:04.556 "data_size": 65536 00:22:04.556 }, 00:22:04.556 { 00:22:04.556 "name": null, 00:22:04.556 "uuid": "292c1d83-4935-4a03-bd25-b00caa6b3aa5", 00:22:04.556 "is_configured": false, 00:22:04.556 "data_offset": 0, 00:22:04.556 "data_size": 65536 00:22:04.556 }, 00:22:04.556 { 00:22:04.556 "name": "BaseBdev3", 00:22:04.556 "uuid": "55e85729-10b6-4189-9074-e0e8b85b4c33", 00:22:04.557 "is_configured": true, 00:22:04.557 "data_offset": 0, 00:22:04.557 "data_size": 65536 00:22:04.557 }, 00:22:04.557 { 00:22:04.557 "name": "BaseBdev4", 00:22:04.557 "uuid": "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5", 00:22:04.557 "is_configured": true, 00:22:04.557 "data_offset": 0, 00:22:04.557 "data_size": 65536 00:22:04.557 } 00:22:04.557 ] 00:22:04.557 }' 00:22:04.557 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.557 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:04.814 17:09:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.814 [2024-11-08 17:09:41.345239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.814 17:09:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.814 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.814 "name": "Existed_Raid", 00:22:04.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.814 "strip_size_kb": 0, 00:22:04.814 "state": "configuring", 00:22:04.814 "raid_level": "raid1", 00:22:04.814 "superblock": false, 00:22:04.815 "num_base_bdevs": 4, 00:22:04.815 "num_base_bdevs_discovered": 3, 00:22:04.815 "num_base_bdevs_operational": 4, 00:22:04.815 "base_bdevs_list": [ 00:22:04.815 { 00:22:04.815 "name": null, 00:22:04.815 "uuid": "dec39106-67c2-4b98-a793-9db8493c8d96", 00:22:04.815 "is_configured": false, 00:22:04.815 "data_offset": 0, 00:22:04.815 "data_size": 65536 00:22:04.815 }, 00:22:04.815 { 00:22:04.815 "name": "BaseBdev2", 00:22:04.815 "uuid": "292c1d83-4935-4a03-bd25-b00caa6b3aa5", 00:22:04.815 "is_configured": true, 00:22:04.815 "data_offset": 0, 00:22:04.815 "data_size": 65536 00:22:04.815 }, 00:22:04.815 { 00:22:04.815 "name": "BaseBdev3", 00:22:04.815 "uuid": "55e85729-10b6-4189-9074-e0e8b85b4c33", 00:22:04.815 "is_configured": true, 00:22:04.815 "data_offset": 0, 00:22:04.815 "data_size": 65536 00:22:04.815 }, 00:22:04.815 { 00:22:04.815 "name": "BaseBdev4", 00:22:04.815 "uuid": "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5", 00:22:04.815 "is_configured": true, 00:22:04.815 "data_offset": 0, 00:22:04.815 "data_size": 65536 00:22:04.815 } 00:22:04.815 ] 00:22:04.815 }' 00:22:04.815 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.815 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.072 17:09:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dec39106-67c2-4b98-a793-9db8493c8d96 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.073 [2024-11-08 17:09:41.763051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:05.073 [2024-11-08 17:09:41.763109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:05.073 [2024-11-08 17:09:41.763119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:05.073 [2024-11-08 17:09:41.763404] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:05.073 [2024-11-08 17:09:41.763555] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:05.073 [2024-11-08 17:09:41.763563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:05.073 [2024-11-08 17:09:41.763847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.073 NewBaseBdev 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:05.073 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.073 17:09:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.331 [ 00:22:05.331 { 00:22:05.331 "name": "NewBaseBdev", 00:22:05.331 "aliases": [ 00:22:05.331 "dec39106-67c2-4b98-a793-9db8493c8d96" 00:22:05.331 ], 00:22:05.331 "product_name": "Malloc disk", 00:22:05.331 "block_size": 512, 00:22:05.331 "num_blocks": 65536, 00:22:05.331 "uuid": "dec39106-67c2-4b98-a793-9db8493c8d96", 00:22:05.331 "assigned_rate_limits": { 00:22:05.331 "rw_ios_per_sec": 0, 00:22:05.331 "rw_mbytes_per_sec": 0, 00:22:05.331 "r_mbytes_per_sec": 0, 00:22:05.331 "w_mbytes_per_sec": 0 00:22:05.331 }, 00:22:05.331 "claimed": true, 00:22:05.331 "claim_type": "exclusive_write", 00:22:05.331 "zoned": false, 00:22:05.331 "supported_io_types": { 00:22:05.331 "read": true, 00:22:05.331 "write": true, 00:22:05.331 "unmap": true, 00:22:05.331 "flush": true, 00:22:05.331 "reset": true, 00:22:05.331 "nvme_admin": false, 00:22:05.331 "nvme_io": false, 00:22:05.331 "nvme_io_md": false, 00:22:05.331 "write_zeroes": true, 00:22:05.331 "zcopy": true, 00:22:05.331 "get_zone_info": false, 00:22:05.331 "zone_management": false, 00:22:05.331 "zone_append": false, 00:22:05.331 "compare": false, 00:22:05.331 "compare_and_write": false, 00:22:05.331 "abort": true, 00:22:05.331 "seek_hole": false, 00:22:05.331 "seek_data": false, 00:22:05.331 "copy": true, 00:22:05.331 "nvme_iov_md": false 00:22:05.331 }, 00:22:05.331 "memory_domains": [ 00:22:05.331 { 00:22:05.331 "dma_device_id": "system", 00:22:05.331 "dma_device_type": 1 00:22:05.331 }, 00:22:05.331 { 00:22:05.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.331 "dma_device_type": 2 00:22:05.331 } 00:22:05.331 ], 00:22:05.331 "driver_specific": {} 00:22:05.331 } 00:22:05.331 ] 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:05.331 17:09:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.331 "name": "Existed_Raid", 00:22:05.331 "uuid": "1bd52c70-613c-422a-bf32-beb09f8d42d0", 00:22:05.331 "strip_size_kb": 0, 00:22:05.331 "state": "online", 00:22:05.331 "raid_level": "raid1", 
00:22:05.331 "superblock": false, 00:22:05.331 "num_base_bdevs": 4, 00:22:05.331 "num_base_bdevs_discovered": 4, 00:22:05.331 "num_base_bdevs_operational": 4, 00:22:05.331 "base_bdevs_list": [ 00:22:05.331 { 00:22:05.331 "name": "NewBaseBdev", 00:22:05.331 "uuid": "dec39106-67c2-4b98-a793-9db8493c8d96", 00:22:05.331 "is_configured": true, 00:22:05.331 "data_offset": 0, 00:22:05.331 "data_size": 65536 00:22:05.331 }, 00:22:05.331 { 00:22:05.331 "name": "BaseBdev2", 00:22:05.331 "uuid": "292c1d83-4935-4a03-bd25-b00caa6b3aa5", 00:22:05.331 "is_configured": true, 00:22:05.331 "data_offset": 0, 00:22:05.331 "data_size": 65536 00:22:05.331 }, 00:22:05.331 { 00:22:05.331 "name": "BaseBdev3", 00:22:05.331 "uuid": "55e85729-10b6-4189-9074-e0e8b85b4c33", 00:22:05.331 "is_configured": true, 00:22:05.331 "data_offset": 0, 00:22:05.331 "data_size": 65536 00:22:05.331 }, 00:22:05.331 { 00:22:05.331 "name": "BaseBdev4", 00:22:05.331 "uuid": "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5", 00:22:05.331 "is_configured": true, 00:22:05.331 "data_offset": 0, 00:22:05.331 "data_size": 65536 00:22:05.331 } 00:22:05.331 ] 00:22:05.331 }' 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.331 17:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:05.589 [2024-11-08 17:09:42.123574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.589 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:05.589 "name": "Existed_Raid", 00:22:05.589 "aliases": [ 00:22:05.589 "1bd52c70-613c-422a-bf32-beb09f8d42d0" 00:22:05.589 ], 00:22:05.589 "product_name": "Raid Volume", 00:22:05.589 "block_size": 512, 00:22:05.589 "num_blocks": 65536, 00:22:05.589 "uuid": "1bd52c70-613c-422a-bf32-beb09f8d42d0", 00:22:05.589 "assigned_rate_limits": { 00:22:05.589 "rw_ios_per_sec": 0, 00:22:05.590 "rw_mbytes_per_sec": 0, 00:22:05.590 "r_mbytes_per_sec": 0, 00:22:05.590 "w_mbytes_per_sec": 0 00:22:05.590 }, 00:22:05.590 "claimed": false, 00:22:05.590 "zoned": false, 00:22:05.590 "supported_io_types": { 00:22:05.590 "read": true, 00:22:05.590 "write": true, 00:22:05.590 "unmap": false, 00:22:05.590 "flush": false, 00:22:05.590 "reset": true, 00:22:05.590 "nvme_admin": false, 00:22:05.590 "nvme_io": false, 00:22:05.590 "nvme_io_md": false, 00:22:05.590 "write_zeroes": true, 00:22:05.590 "zcopy": false, 00:22:05.590 "get_zone_info": false, 00:22:05.590 "zone_management": false, 00:22:05.590 "zone_append": false, 00:22:05.590 "compare": false, 00:22:05.590 "compare_and_write": false, 00:22:05.590 "abort": false, 00:22:05.590 "seek_hole": false, 00:22:05.590 "seek_data": false, 00:22:05.590 "copy": false, 00:22:05.590 
"nvme_iov_md": false 00:22:05.590 }, 00:22:05.590 "memory_domains": [ 00:22:05.590 { 00:22:05.590 "dma_device_id": "system", 00:22:05.590 "dma_device_type": 1 00:22:05.590 }, 00:22:05.590 { 00:22:05.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.590 "dma_device_type": 2 00:22:05.590 }, 00:22:05.590 { 00:22:05.590 "dma_device_id": "system", 00:22:05.590 "dma_device_type": 1 00:22:05.590 }, 00:22:05.590 { 00:22:05.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.590 "dma_device_type": 2 00:22:05.590 }, 00:22:05.590 { 00:22:05.590 "dma_device_id": "system", 00:22:05.590 "dma_device_type": 1 00:22:05.590 }, 00:22:05.590 { 00:22:05.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.590 "dma_device_type": 2 00:22:05.590 }, 00:22:05.590 { 00:22:05.590 "dma_device_id": "system", 00:22:05.590 "dma_device_type": 1 00:22:05.590 }, 00:22:05.590 { 00:22:05.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.590 "dma_device_type": 2 00:22:05.590 } 00:22:05.590 ], 00:22:05.590 "driver_specific": { 00:22:05.590 "raid": { 00:22:05.590 "uuid": "1bd52c70-613c-422a-bf32-beb09f8d42d0", 00:22:05.590 "strip_size_kb": 0, 00:22:05.590 "state": "online", 00:22:05.590 "raid_level": "raid1", 00:22:05.590 "superblock": false, 00:22:05.590 "num_base_bdevs": 4, 00:22:05.590 "num_base_bdevs_discovered": 4, 00:22:05.590 "num_base_bdevs_operational": 4, 00:22:05.590 "base_bdevs_list": [ 00:22:05.590 { 00:22:05.590 "name": "NewBaseBdev", 00:22:05.590 "uuid": "dec39106-67c2-4b98-a793-9db8493c8d96", 00:22:05.590 "is_configured": true, 00:22:05.590 "data_offset": 0, 00:22:05.590 "data_size": 65536 00:22:05.590 }, 00:22:05.590 { 00:22:05.590 "name": "BaseBdev2", 00:22:05.590 "uuid": "292c1d83-4935-4a03-bd25-b00caa6b3aa5", 00:22:05.590 "is_configured": true, 00:22:05.590 "data_offset": 0, 00:22:05.590 "data_size": 65536 00:22:05.590 }, 00:22:05.590 { 00:22:05.590 "name": "BaseBdev3", 00:22:05.590 "uuid": "55e85729-10b6-4189-9074-e0e8b85b4c33", 00:22:05.590 "is_configured": true, 
00:22:05.590 "data_offset": 0, 00:22:05.590 "data_size": 65536 00:22:05.590 }, 00:22:05.590 { 00:22:05.590 "name": "BaseBdev4", 00:22:05.590 "uuid": "0ec3013a-2ef1-416d-b6f1-98e015d6a4e5", 00:22:05.590 "is_configured": true, 00:22:05.590 "data_offset": 0, 00:22:05.590 "data_size": 65536 00:22:05.590 } 00:22:05.590 ] 00:22:05.590 } 00:22:05.590 } 00:22:05.590 }' 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:05.590 BaseBdev2 00:22:05.590 BaseBdev3 00:22:05.590 BaseBdev4' 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.590 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.849 [2024-11-08 17:09:42.351250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:05.849 [2024-11-08 17:09:42.351278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:05.849 [2024-11-08 17:09:42.351367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.849 [2024-11-08 17:09:42.351675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:05.849 [2024-11-08 17:09:42.351689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71659 
00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71659 ']' 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71659 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71659 00:22:05.849 killing process with pid 71659 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71659' 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71659 00:22:05.849 [2024-11-08 17:09:42.383733] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.849 17:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71659 00:22:06.109 [2024-11-08 17:09:42.638988] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:07.044 ************************************ 00:22:07.044 END TEST raid_state_function_test 00:22:07.044 ************************************ 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:07.044 00:22:07.044 real 0m8.648s 00:22:07.044 user 0m13.613s 00:22:07.044 sys 0m1.511s 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.044 17:09:43 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:22:07.044 17:09:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:07.044 17:09:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:07.044 17:09:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:07.044 ************************************ 00:22:07.044 START TEST raid_state_function_test_sb 00:22:07.044 ************************************ 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:07.044 17:09:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:07.044 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:07.045 Process raid pid: 72297 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72297 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72297' 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72297 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72297 ']' 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:07.045 17:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.045 [2024-11-08 17:09:43.543682] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:22:07.045 [2024-11-08 17:09:43.544036] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.045 [2024-11-08 17:09:43.705493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.304 [2024-11-08 17:09:43.825425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.304 [2024-11-08 17:09:43.976022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.304 [2024-11-08 17:09:43.976065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.869 [2024-11-08 17:09:44.401182] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:07.869 [2024-11-08 17:09:44.401238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:07.869 [2024-11-08 17:09:44.401249] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:07.869 [2024-11-08 17:09:44.401259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:07.869 [2024-11-08 17:09:44.401265] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:22:07.869 [2024-11-08 17:09:44.401274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:07.869 [2024-11-08 17:09:44.401285] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:07.869 [2024-11-08 17:09:44.401293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.869 17:09:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.869 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.869 "name": "Existed_Raid", 00:22:07.869 "uuid": "c0518857-82ce-47ff-b960-da1da2aca3e6", 00:22:07.869 "strip_size_kb": 0, 00:22:07.869 "state": "configuring", 00:22:07.869 "raid_level": "raid1", 00:22:07.869 "superblock": true, 00:22:07.869 "num_base_bdevs": 4, 00:22:07.869 "num_base_bdevs_discovered": 0, 00:22:07.869 "num_base_bdevs_operational": 4, 00:22:07.869 "base_bdevs_list": [ 00:22:07.869 { 00:22:07.869 "name": "BaseBdev1", 00:22:07.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.869 "is_configured": false, 00:22:07.869 "data_offset": 0, 00:22:07.869 "data_size": 0 00:22:07.869 }, 00:22:07.869 { 00:22:07.869 "name": "BaseBdev2", 00:22:07.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.869 "is_configured": false, 00:22:07.869 "data_offset": 0, 00:22:07.869 "data_size": 0 00:22:07.869 }, 00:22:07.869 { 00:22:07.869 "name": "BaseBdev3", 00:22:07.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.869 "is_configured": false, 00:22:07.869 "data_offset": 0, 00:22:07.869 "data_size": 0 00:22:07.869 }, 00:22:07.869 { 00:22:07.869 "name": "BaseBdev4", 00:22:07.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.869 "is_configured": false, 00:22:07.869 "data_offset": 0, 00:22:07.870 "data_size": 0 00:22:07.870 } 00:22:07.870 ] 00:22:07.870 }' 00:22:07.870 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.870 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.151 17:09:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:08.151 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.151 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.151 [2024-11-08 17:09:44.713205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:08.151 [2024-11-08 17:09:44.713246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:08.151 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.151 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:08.151 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.151 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.151 [2024-11-08 17:09:44.721212] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:08.151 [2024-11-08 17:09:44.721258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:08.151 [2024-11-08 17:09:44.721267] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.151 [2024-11-08 17:09:44.721276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.151 [2024-11-08 17:09:44.721282] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:08.151 [2024-11-08 17:09:44.721291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:08.151 [2024-11-08 17:09:44.721297] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:22:08.151 [2024-11-08 17:09:44.721306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.152 [2024-11-08 17:09:44.756114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.152 BaseBdev1 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.152 [ 00:22:08.152 { 00:22:08.152 "name": "BaseBdev1", 00:22:08.152 "aliases": [ 00:22:08.152 "b6ffaa76-c7c3-4152-97fd-6ece85859b84" 00:22:08.152 ], 00:22:08.152 "product_name": "Malloc disk", 00:22:08.152 "block_size": 512, 00:22:08.152 "num_blocks": 65536, 00:22:08.152 "uuid": "b6ffaa76-c7c3-4152-97fd-6ece85859b84", 00:22:08.152 "assigned_rate_limits": { 00:22:08.152 "rw_ios_per_sec": 0, 00:22:08.152 "rw_mbytes_per_sec": 0, 00:22:08.152 "r_mbytes_per_sec": 0, 00:22:08.152 "w_mbytes_per_sec": 0 00:22:08.152 }, 00:22:08.152 "claimed": true, 00:22:08.152 "claim_type": "exclusive_write", 00:22:08.152 "zoned": false, 00:22:08.152 "supported_io_types": { 00:22:08.152 "read": true, 00:22:08.152 "write": true, 00:22:08.152 "unmap": true, 00:22:08.152 "flush": true, 00:22:08.152 "reset": true, 00:22:08.152 "nvme_admin": false, 00:22:08.152 "nvme_io": false, 00:22:08.152 "nvme_io_md": false, 00:22:08.152 "write_zeroes": true, 00:22:08.152 "zcopy": true, 00:22:08.152 "get_zone_info": false, 00:22:08.152 "zone_management": false, 00:22:08.152 "zone_append": false, 00:22:08.152 "compare": false, 00:22:08.152 "compare_and_write": false, 00:22:08.152 "abort": true, 00:22:08.152 "seek_hole": false, 00:22:08.152 "seek_data": false, 00:22:08.152 "copy": true, 00:22:08.152 "nvme_iov_md": false 00:22:08.152 }, 00:22:08.152 "memory_domains": [ 00:22:08.152 { 00:22:08.152 "dma_device_id": "system", 00:22:08.152 "dma_device_type": 1 00:22:08.152 }, 00:22:08.152 { 00:22:08.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.152 "dma_device_type": 2 00:22:08.152 } 00:22:08.152 ], 00:22:08.152 "driver_specific": {} 
00:22:08.152 } 00:22:08.152 ] 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.152 "name": "Existed_Raid", 00:22:08.152 "uuid": "cc241c57-9cf1-4301-b16c-b0e2059aa965", 00:22:08.152 "strip_size_kb": 0, 00:22:08.152 "state": "configuring", 00:22:08.152 "raid_level": "raid1", 00:22:08.152 "superblock": true, 00:22:08.152 "num_base_bdevs": 4, 00:22:08.152 "num_base_bdevs_discovered": 1, 00:22:08.152 "num_base_bdevs_operational": 4, 00:22:08.152 "base_bdevs_list": [ 00:22:08.152 { 00:22:08.152 "name": "BaseBdev1", 00:22:08.152 "uuid": "b6ffaa76-c7c3-4152-97fd-6ece85859b84", 00:22:08.152 "is_configured": true, 00:22:08.152 "data_offset": 2048, 00:22:08.152 "data_size": 63488 00:22:08.152 }, 00:22:08.152 { 00:22:08.152 "name": "BaseBdev2", 00:22:08.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.152 "is_configured": false, 00:22:08.152 "data_offset": 0, 00:22:08.152 "data_size": 0 00:22:08.152 }, 00:22:08.152 { 00:22:08.152 "name": "BaseBdev3", 00:22:08.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.152 "is_configured": false, 00:22:08.152 "data_offset": 0, 00:22:08.152 "data_size": 0 00:22:08.152 }, 00:22:08.152 { 00:22:08.152 "name": "BaseBdev4", 00:22:08.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.152 "is_configured": false, 00:22:08.152 "data_offset": 0, 00:22:08.152 "data_size": 0 00:22:08.152 } 00:22:08.152 ] 00:22:08.152 }' 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.152 17:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:08.448 [2024-11-08 17:09:45.096246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:08.448 [2024-11-08 17:09:45.096404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.448 [2024-11-08 17:09:45.104298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.448 [2024-11-08 17:09:45.106384] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.448 [2024-11-08 17:09:45.106427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.448 [2024-11-08 17:09:45.106438] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:08.448 [2024-11-08 17:09:45.106450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:08.448 [2024-11-08 17:09:45.106458] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:08.448 [2024-11-08 17:09:45.106468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:08.448 17:09:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.448 "name": 
"Existed_Raid", 00:22:08.448 "uuid": "33b4f5d3-1366-48d9-b438-d48078240826", 00:22:08.448 "strip_size_kb": 0, 00:22:08.448 "state": "configuring", 00:22:08.448 "raid_level": "raid1", 00:22:08.448 "superblock": true, 00:22:08.448 "num_base_bdevs": 4, 00:22:08.448 "num_base_bdevs_discovered": 1, 00:22:08.448 "num_base_bdevs_operational": 4, 00:22:08.448 "base_bdevs_list": [ 00:22:08.448 { 00:22:08.448 "name": "BaseBdev1", 00:22:08.448 "uuid": "b6ffaa76-c7c3-4152-97fd-6ece85859b84", 00:22:08.448 "is_configured": true, 00:22:08.448 "data_offset": 2048, 00:22:08.448 "data_size": 63488 00:22:08.448 }, 00:22:08.448 { 00:22:08.448 "name": "BaseBdev2", 00:22:08.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.448 "is_configured": false, 00:22:08.448 "data_offset": 0, 00:22:08.448 "data_size": 0 00:22:08.448 }, 00:22:08.448 { 00:22:08.448 "name": "BaseBdev3", 00:22:08.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.448 "is_configured": false, 00:22:08.448 "data_offset": 0, 00:22:08.448 "data_size": 0 00:22:08.448 }, 00:22:08.448 { 00:22:08.448 "name": "BaseBdev4", 00:22:08.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.448 "is_configured": false, 00:22:08.448 "data_offset": 0, 00:22:08.448 "data_size": 0 00:22:08.448 } 00:22:08.448 ] 00:22:08.448 }' 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.448 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.016 [2024-11-08 17:09:45.461213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:09.016 
BaseBdev2 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.016 [ 00:22:09.016 { 00:22:09.016 "name": "BaseBdev2", 00:22:09.016 "aliases": [ 00:22:09.016 "e14855cb-d330-4279-908f-ec5aec9a0f20" 00:22:09.016 ], 00:22:09.016 "product_name": "Malloc disk", 00:22:09.016 "block_size": 512, 00:22:09.016 "num_blocks": 65536, 00:22:09.016 "uuid": "e14855cb-d330-4279-908f-ec5aec9a0f20", 00:22:09.016 "assigned_rate_limits": { 
00:22:09.016 "rw_ios_per_sec": 0, 00:22:09.016 "rw_mbytes_per_sec": 0, 00:22:09.016 "r_mbytes_per_sec": 0, 00:22:09.016 "w_mbytes_per_sec": 0 00:22:09.016 }, 00:22:09.016 "claimed": true, 00:22:09.016 "claim_type": "exclusive_write", 00:22:09.016 "zoned": false, 00:22:09.016 "supported_io_types": { 00:22:09.016 "read": true, 00:22:09.016 "write": true, 00:22:09.016 "unmap": true, 00:22:09.016 "flush": true, 00:22:09.016 "reset": true, 00:22:09.016 "nvme_admin": false, 00:22:09.016 "nvme_io": false, 00:22:09.016 "nvme_io_md": false, 00:22:09.016 "write_zeroes": true, 00:22:09.016 "zcopy": true, 00:22:09.016 "get_zone_info": false, 00:22:09.016 "zone_management": false, 00:22:09.016 "zone_append": false, 00:22:09.016 "compare": false, 00:22:09.016 "compare_and_write": false, 00:22:09.016 "abort": true, 00:22:09.016 "seek_hole": false, 00:22:09.016 "seek_data": false, 00:22:09.016 "copy": true, 00:22:09.016 "nvme_iov_md": false 00:22:09.016 }, 00:22:09.016 "memory_domains": [ 00:22:09.016 { 00:22:09.016 "dma_device_id": "system", 00:22:09.016 "dma_device_type": 1 00:22:09.016 }, 00:22:09.016 { 00:22:09.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.016 "dma_device_type": 2 00:22:09.016 } 00:22:09.016 ], 00:22:09.016 "driver_specific": {} 00:22:09.016 } 00:22:09.016 ] 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.016 "name": "Existed_Raid", 00:22:09.016 "uuid": "33b4f5d3-1366-48d9-b438-d48078240826", 00:22:09.016 "strip_size_kb": 0, 00:22:09.016 "state": "configuring", 00:22:09.016 "raid_level": "raid1", 00:22:09.016 "superblock": true, 00:22:09.016 "num_base_bdevs": 4, 00:22:09.016 "num_base_bdevs_discovered": 2, 00:22:09.016 "num_base_bdevs_operational": 4, 00:22:09.016 
"base_bdevs_list": [ 00:22:09.016 { 00:22:09.016 "name": "BaseBdev1", 00:22:09.016 "uuid": "b6ffaa76-c7c3-4152-97fd-6ece85859b84", 00:22:09.016 "is_configured": true, 00:22:09.016 "data_offset": 2048, 00:22:09.016 "data_size": 63488 00:22:09.016 }, 00:22:09.016 { 00:22:09.016 "name": "BaseBdev2", 00:22:09.016 "uuid": "e14855cb-d330-4279-908f-ec5aec9a0f20", 00:22:09.016 "is_configured": true, 00:22:09.016 "data_offset": 2048, 00:22:09.016 "data_size": 63488 00:22:09.016 }, 00:22:09.016 { 00:22:09.016 "name": "BaseBdev3", 00:22:09.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.016 "is_configured": false, 00:22:09.016 "data_offset": 0, 00:22:09.016 "data_size": 0 00:22:09.016 }, 00:22:09.016 { 00:22:09.016 "name": "BaseBdev4", 00:22:09.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.016 "is_configured": false, 00:22:09.016 "data_offset": 0, 00:22:09.016 "data_size": 0 00:22:09.016 } 00:22:09.016 ] 00:22:09.016 }' 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.016 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.275 [2024-11-08 17:09:45.834635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:09.275 BaseBdev3 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.275 [ 00:22:09.275 { 00:22:09.275 "name": "BaseBdev3", 00:22:09.275 "aliases": [ 00:22:09.275 "d62e6313-cc02-40aa-92c5-6b4ce27a0c0a" 00:22:09.275 ], 00:22:09.275 "product_name": "Malloc disk", 00:22:09.275 "block_size": 512, 00:22:09.275 "num_blocks": 65536, 00:22:09.275 "uuid": "d62e6313-cc02-40aa-92c5-6b4ce27a0c0a", 00:22:09.275 "assigned_rate_limits": { 00:22:09.275 "rw_ios_per_sec": 0, 00:22:09.275 "rw_mbytes_per_sec": 0, 00:22:09.275 "r_mbytes_per_sec": 0, 00:22:09.275 "w_mbytes_per_sec": 0 00:22:09.275 }, 00:22:09.275 "claimed": true, 00:22:09.275 "claim_type": "exclusive_write", 00:22:09.275 "zoned": false, 00:22:09.275 "supported_io_types": { 00:22:09.275 "read": true, 00:22:09.275 
"write": true, 00:22:09.275 "unmap": true, 00:22:09.275 "flush": true, 00:22:09.275 "reset": true, 00:22:09.275 "nvme_admin": false, 00:22:09.275 "nvme_io": false, 00:22:09.275 "nvme_io_md": false, 00:22:09.275 "write_zeroes": true, 00:22:09.275 "zcopy": true, 00:22:09.275 "get_zone_info": false, 00:22:09.275 "zone_management": false, 00:22:09.275 "zone_append": false, 00:22:09.275 "compare": false, 00:22:09.275 "compare_and_write": false, 00:22:09.275 "abort": true, 00:22:09.275 "seek_hole": false, 00:22:09.275 "seek_data": false, 00:22:09.275 "copy": true, 00:22:09.275 "nvme_iov_md": false 00:22:09.275 }, 00:22:09.275 "memory_domains": [ 00:22:09.275 { 00:22:09.275 "dma_device_id": "system", 00:22:09.275 "dma_device_type": 1 00:22:09.275 }, 00:22:09.275 { 00:22:09.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.275 "dma_device_type": 2 00:22:09.275 } 00:22:09.275 ], 00:22:09.275 "driver_specific": {} 00:22:09.275 } 00:22:09.275 ] 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.275 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.275 "name": "Existed_Raid", 00:22:09.275 "uuid": "33b4f5d3-1366-48d9-b438-d48078240826", 00:22:09.275 "strip_size_kb": 0, 00:22:09.275 "state": "configuring", 00:22:09.275 "raid_level": "raid1", 00:22:09.275 "superblock": true, 00:22:09.275 "num_base_bdevs": 4, 00:22:09.276 "num_base_bdevs_discovered": 3, 00:22:09.276 "num_base_bdevs_operational": 4, 00:22:09.276 "base_bdevs_list": [ 00:22:09.276 { 00:22:09.276 "name": "BaseBdev1", 00:22:09.276 "uuid": "b6ffaa76-c7c3-4152-97fd-6ece85859b84", 00:22:09.276 "is_configured": true, 00:22:09.276 "data_offset": 2048, 00:22:09.276 "data_size": 63488 00:22:09.276 }, 00:22:09.276 { 00:22:09.276 "name": "BaseBdev2", 00:22:09.276 "uuid": 
"e14855cb-d330-4279-908f-ec5aec9a0f20", 00:22:09.276 "is_configured": true, 00:22:09.276 "data_offset": 2048, 00:22:09.276 "data_size": 63488 00:22:09.276 }, 00:22:09.276 { 00:22:09.276 "name": "BaseBdev3", 00:22:09.276 "uuid": "d62e6313-cc02-40aa-92c5-6b4ce27a0c0a", 00:22:09.276 "is_configured": true, 00:22:09.276 "data_offset": 2048, 00:22:09.276 "data_size": 63488 00:22:09.276 }, 00:22:09.276 { 00:22:09.276 "name": "BaseBdev4", 00:22:09.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.276 "is_configured": false, 00:22:09.276 "data_offset": 0, 00:22:09.276 "data_size": 0 00:22:09.276 } 00:22:09.276 ] 00:22:09.276 }' 00:22:09.276 17:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.276 17:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.533 [2024-11-08 17:09:46.239401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:09.533 [2024-11-08 17:09:46.239849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:09.533 [2024-11-08 17:09:46.239869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:09.533 [2024-11-08 17:09:46.240165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:09.533 BaseBdev4 00:22:09.533 [2024-11-08 17:09:46.240316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:09.533 [2024-11-08 17:09:46.240329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:22:09.533 [2024-11-08 17:09:46.240470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.533 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.792 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.792 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:09.792 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.792 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.792 [ 00:22:09.792 { 00:22:09.792 "name": "BaseBdev4", 00:22:09.792 "aliases": [ 00:22:09.792 "5ff5a61e-eaa1-4b06-a251-f1210f1966c7" 00:22:09.792 ], 00:22:09.792 "product_name": "Malloc disk", 00:22:09.792 "block_size": 512, 00:22:09.792 
"num_blocks": 65536, 00:22:09.792 "uuid": "5ff5a61e-eaa1-4b06-a251-f1210f1966c7", 00:22:09.792 "assigned_rate_limits": { 00:22:09.792 "rw_ios_per_sec": 0, 00:22:09.792 "rw_mbytes_per_sec": 0, 00:22:09.792 "r_mbytes_per_sec": 0, 00:22:09.792 "w_mbytes_per_sec": 0 00:22:09.792 }, 00:22:09.792 "claimed": true, 00:22:09.792 "claim_type": "exclusive_write", 00:22:09.792 "zoned": false, 00:22:09.792 "supported_io_types": { 00:22:09.792 "read": true, 00:22:09.792 "write": true, 00:22:09.792 "unmap": true, 00:22:09.792 "flush": true, 00:22:09.792 "reset": true, 00:22:09.792 "nvme_admin": false, 00:22:09.792 "nvme_io": false, 00:22:09.792 "nvme_io_md": false, 00:22:09.792 "write_zeroes": true, 00:22:09.792 "zcopy": true, 00:22:09.792 "get_zone_info": false, 00:22:09.792 "zone_management": false, 00:22:09.792 "zone_append": false, 00:22:09.792 "compare": false, 00:22:09.792 "compare_and_write": false, 00:22:09.792 "abort": true, 00:22:09.792 "seek_hole": false, 00:22:09.792 "seek_data": false, 00:22:09.792 "copy": true, 00:22:09.792 "nvme_iov_md": false 00:22:09.792 }, 00:22:09.792 "memory_domains": [ 00:22:09.792 { 00:22:09.792 "dma_device_id": "system", 00:22:09.792 "dma_device_type": 1 00:22:09.792 }, 00:22:09.792 { 00:22:09.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.792 "dma_device_type": 2 00:22:09.792 } 00:22:09.792 ], 00:22:09.792 "driver_specific": {} 00:22:09.792 } 00:22:09.792 ] 00:22:09.792 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.792 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:09.792 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:09.792 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.793 "name": "Existed_Raid", 00:22:09.793 "uuid": "33b4f5d3-1366-48d9-b438-d48078240826", 00:22:09.793 "strip_size_kb": 0, 00:22:09.793 "state": "online", 00:22:09.793 "raid_level": "raid1", 00:22:09.793 "superblock": true, 00:22:09.793 "num_base_bdevs": 4, 
00:22:09.793 "num_base_bdevs_discovered": 4, 00:22:09.793 "num_base_bdevs_operational": 4, 00:22:09.793 "base_bdevs_list": [ 00:22:09.793 { 00:22:09.793 "name": "BaseBdev1", 00:22:09.793 "uuid": "b6ffaa76-c7c3-4152-97fd-6ece85859b84", 00:22:09.793 "is_configured": true, 00:22:09.793 "data_offset": 2048, 00:22:09.793 "data_size": 63488 00:22:09.793 }, 00:22:09.793 { 00:22:09.793 "name": "BaseBdev2", 00:22:09.793 "uuid": "e14855cb-d330-4279-908f-ec5aec9a0f20", 00:22:09.793 "is_configured": true, 00:22:09.793 "data_offset": 2048, 00:22:09.793 "data_size": 63488 00:22:09.793 }, 00:22:09.793 { 00:22:09.793 "name": "BaseBdev3", 00:22:09.793 "uuid": "d62e6313-cc02-40aa-92c5-6b4ce27a0c0a", 00:22:09.793 "is_configured": true, 00:22:09.793 "data_offset": 2048, 00:22:09.793 "data_size": 63488 00:22:09.793 }, 00:22:09.793 { 00:22:09.793 "name": "BaseBdev4", 00:22:09.793 "uuid": "5ff5a61e-eaa1-4b06-a251-f1210f1966c7", 00:22:09.793 "is_configured": true, 00:22:09.793 "data_offset": 2048, 00:22:09.793 "data_size": 63488 00:22:09.793 } 00:22:09.793 ] 00:22:09.793 }' 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.793 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:10.055 
17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.055 [2024-11-08 17:09:46.615952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:10.055 "name": "Existed_Raid", 00:22:10.055 "aliases": [ 00:22:10.055 "33b4f5d3-1366-48d9-b438-d48078240826" 00:22:10.055 ], 00:22:10.055 "product_name": "Raid Volume", 00:22:10.055 "block_size": 512, 00:22:10.055 "num_blocks": 63488, 00:22:10.055 "uuid": "33b4f5d3-1366-48d9-b438-d48078240826", 00:22:10.055 "assigned_rate_limits": { 00:22:10.055 "rw_ios_per_sec": 0, 00:22:10.055 "rw_mbytes_per_sec": 0, 00:22:10.055 "r_mbytes_per_sec": 0, 00:22:10.055 "w_mbytes_per_sec": 0 00:22:10.055 }, 00:22:10.055 "claimed": false, 00:22:10.055 "zoned": false, 00:22:10.055 "supported_io_types": { 00:22:10.055 "read": true, 00:22:10.055 "write": true, 00:22:10.055 "unmap": false, 00:22:10.055 "flush": false, 00:22:10.055 "reset": true, 00:22:10.055 "nvme_admin": false, 00:22:10.055 "nvme_io": false, 00:22:10.055 "nvme_io_md": false, 00:22:10.055 "write_zeroes": true, 00:22:10.055 "zcopy": false, 00:22:10.055 "get_zone_info": false, 00:22:10.055 "zone_management": false, 00:22:10.055 "zone_append": false, 00:22:10.055 "compare": false, 00:22:10.055 "compare_and_write": false, 00:22:10.055 "abort": false, 00:22:10.055 "seek_hole": false, 00:22:10.055 "seek_data": false, 00:22:10.055 "copy": false, 00:22:10.055 
"nvme_iov_md": false 00:22:10.055 }, 00:22:10.055 "memory_domains": [ 00:22:10.055 { 00:22:10.055 "dma_device_id": "system", 00:22:10.055 "dma_device_type": 1 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.055 "dma_device_type": 2 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "dma_device_id": "system", 00:22:10.055 "dma_device_type": 1 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.055 "dma_device_type": 2 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "dma_device_id": "system", 00:22:10.055 "dma_device_type": 1 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.055 "dma_device_type": 2 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "dma_device_id": "system", 00:22:10.055 "dma_device_type": 1 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.055 "dma_device_type": 2 00:22:10.055 } 00:22:10.055 ], 00:22:10.055 "driver_specific": { 00:22:10.055 "raid": { 00:22:10.055 "uuid": "33b4f5d3-1366-48d9-b438-d48078240826", 00:22:10.055 "strip_size_kb": 0, 00:22:10.055 "state": "online", 00:22:10.055 "raid_level": "raid1", 00:22:10.055 "superblock": true, 00:22:10.055 "num_base_bdevs": 4, 00:22:10.055 "num_base_bdevs_discovered": 4, 00:22:10.055 "num_base_bdevs_operational": 4, 00:22:10.055 "base_bdevs_list": [ 00:22:10.055 { 00:22:10.055 "name": "BaseBdev1", 00:22:10.055 "uuid": "b6ffaa76-c7c3-4152-97fd-6ece85859b84", 00:22:10.055 "is_configured": true, 00:22:10.055 "data_offset": 2048, 00:22:10.055 "data_size": 63488 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "name": "BaseBdev2", 00:22:10.055 "uuid": "e14855cb-d330-4279-908f-ec5aec9a0f20", 00:22:10.055 "is_configured": true, 00:22:10.055 "data_offset": 2048, 00:22:10.055 "data_size": 63488 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "name": "BaseBdev3", 00:22:10.055 "uuid": "d62e6313-cc02-40aa-92c5-6b4ce27a0c0a", 00:22:10.055 "is_configured": true, 
00:22:10.055 "data_offset": 2048, 00:22:10.055 "data_size": 63488 00:22:10.055 }, 00:22:10.055 { 00:22:10.055 "name": "BaseBdev4", 00:22:10.055 "uuid": "5ff5a61e-eaa1-4b06-a251-f1210f1966c7", 00:22:10.055 "is_configured": true, 00:22:10.055 "data_offset": 2048, 00:22:10.055 "data_size": 63488 00:22:10.055 } 00:22:10.055 ] 00:22:10.055 } 00:22:10.055 } 00:22:10.055 }' 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:10.055 BaseBdev2 00:22:10.055 BaseBdev3 00:22:10.055 BaseBdev4' 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:10.055 17:09:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:10.055 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.311 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:10.311 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.311 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.311 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.311 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.311 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:10.311 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.312 [2024-11-08 17:09:46.843653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:10.312 17:09:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.312 "name": "Existed_Raid", 00:22:10.312 "uuid": "33b4f5d3-1366-48d9-b438-d48078240826", 00:22:10.312 "strip_size_kb": 0, 00:22:10.312 
"state": "online", 00:22:10.312 "raid_level": "raid1", 00:22:10.312 "superblock": true, 00:22:10.312 "num_base_bdevs": 4, 00:22:10.312 "num_base_bdevs_discovered": 3, 00:22:10.312 "num_base_bdevs_operational": 3, 00:22:10.312 "base_bdevs_list": [ 00:22:10.312 { 00:22:10.312 "name": null, 00:22:10.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.312 "is_configured": false, 00:22:10.312 "data_offset": 0, 00:22:10.312 "data_size": 63488 00:22:10.312 }, 00:22:10.312 { 00:22:10.312 "name": "BaseBdev2", 00:22:10.312 "uuid": "e14855cb-d330-4279-908f-ec5aec9a0f20", 00:22:10.312 "is_configured": true, 00:22:10.312 "data_offset": 2048, 00:22:10.312 "data_size": 63488 00:22:10.312 }, 00:22:10.312 { 00:22:10.312 "name": "BaseBdev3", 00:22:10.312 "uuid": "d62e6313-cc02-40aa-92c5-6b4ce27a0c0a", 00:22:10.312 "is_configured": true, 00:22:10.312 "data_offset": 2048, 00:22:10.312 "data_size": 63488 00:22:10.312 }, 00:22:10.312 { 00:22:10.312 "name": "BaseBdev4", 00:22:10.312 "uuid": "5ff5a61e-eaa1-4b06-a251-f1210f1966c7", 00:22:10.312 "is_configured": true, 00:22:10.312 "data_offset": 2048, 00:22:10.312 "data_size": 63488 00:22:10.312 } 00:22:10.312 ] 00:22:10.312 }' 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.312 17:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.570 17:09:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.570 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.570 [2024-11-08 17:09:47.261792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.829 [2024-11-08 17:09:47.363384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.829 [2024-11-08 17:09:47.459020] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:10.829 [2024-11-08 17:09:47.459132] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.829 [2024-11-08 17:09:47.522700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.829 [2024-11-08 17:09:47.522781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:10.829 [2024-11-08 17:09:47.522795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:10.829 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.088 BaseBdev2 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.088 17:09:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:22:11.088 [ 00:22:11.088 { 00:22:11.088 "name": "BaseBdev2", 00:22:11.088 "aliases": [ 00:22:11.088 "98cc9116-4fbd-4a57-9d6a-28b3bab1219e" 00:22:11.088 ], 00:22:11.088 "product_name": "Malloc disk", 00:22:11.088 "block_size": 512, 00:22:11.088 "num_blocks": 65536, 00:22:11.088 "uuid": "98cc9116-4fbd-4a57-9d6a-28b3bab1219e", 00:22:11.088 "assigned_rate_limits": { 00:22:11.088 "rw_ios_per_sec": 0, 00:22:11.088 "rw_mbytes_per_sec": 0, 00:22:11.088 "r_mbytes_per_sec": 0, 00:22:11.089 "w_mbytes_per_sec": 0 00:22:11.089 }, 00:22:11.089 "claimed": false, 00:22:11.089 "zoned": false, 00:22:11.089 "supported_io_types": { 00:22:11.089 "read": true, 00:22:11.089 "write": true, 00:22:11.089 "unmap": true, 00:22:11.089 "flush": true, 00:22:11.089 "reset": true, 00:22:11.089 "nvme_admin": false, 00:22:11.089 "nvme_io": false, 00:22:11.089 "nvme_io_md": false, 00:22:11.089 "write_zeroes": true, 00:22:11.089 "zcopy": true, 00:22:11.089 "get_zone_info": false, 00:22:11.089 "zone_management": false, 00:22:11.089 "zone_append": false, 00:22:11.089 "compare": false, 00:22:11.089 "compare_and_write": false, 00:22:11.089 "abort": true, 00:22:11.089 "seek_hole": false, 00:22:11.089 "seek_data": false, 00:22:11.089 "copy": true, 00:22:11.089 "nvme_iov_md": false 00:22:11.089 }, 00:22:11.089 "memory_domains": [ 00:22:11.089 { 00:22:11.089 "dma_device_id": "system", 00:22:11.089 "dma_device_type": 1 00:22:11.089 }, 00:22:11.089 { 00:22:11.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.089 "dma_device_type": 2 00:22:11.089 } 00:22:11.089 ], 00:22:11.089 "driver_specific": {} 00:22:11.089 } 00:22:11.089 ] 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:11.089 17:09:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.089 BaseBdev3 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.089 17:09:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.089 [ 00:22:11.089 { 00:22:11.089 "name": "BaseBdev3", 00:22:11.089 "aliases": [ 00:22:11.089 "cfe7e697-dfde-498a-8046-a110c2ccc4dd" 00:22:11.089 ], 00:22:11.089 "product_name": "Malloc disk", 00:22:11.089 "block_size": 512, 00:22:11.089 "num_blocks": 65536, 00:22:11.089 "uuid": "cfe7e697-dfde-498a-8046-a110c2ccc4dd", 00:22:11.089 "assigned_rate_limits": { 00:22:11.089 "rw_ios_per_sec": 0, 00:22:11.089 "rw_mbytes_per_sec": 0, 00:22:11.089 "r_mbytes_per_sec": 0, 00:22:11.089 "w_mbytes_per_sec": 0 00:22:11.089 }, 00:22:11.089 "claimed": false, 00:22:11.089 "zoned": false, 00:22:11.089 "supported_io_types": { 00:22:11.089 "read": true, 00:22:11.089 "write": true, 00:22:11.089 "unmap": true, 00:22:11.089 "flush": true, 00:22:11.089 "reset": true, 00:22:11.089 "nvme_admin": false, 00:22:11.089 "nvme_io": false, 00:22:11.089 "nvme_io_md": false, 00:22:11.089 "write_zeroes": true, 00:22:11.089 "zcopy": true, 00:22:11.089 "get_zone_info": false, 00:22:11.089 "zone_management": false, 00:22:11.089 "zone_append": false, 00:22:11.089 "compare": false, 00:22:11.089 "compare_and_write": false, 00:22:11.089 "abort": true, 00:22:11.089 "seek_hole": false, 00:22:11.089 "seek_data": false, 00:22:11.089 "copy": true, 00:22:11.089 "nvme_iov_md": false 00:22:11.089 }, 00:22:11.089 "memory_domains": [ 00:22:11.089 { 00:22:11.089 "dma_device_id": "system", 00:22:11.089 "dma_device_type": 1 00:22:11.089 }, 00:22:11.089 { 00:22:11.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.089 "dma_device_type": 2 00:22:11.089 } 00:22:11.089 ], 00:22:11.089 "driver_specific": {} 00:22:11.089 } 00:22:11.089 ] 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.089 BaseBdev4 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:11.089 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.090 [ 00:22:11.090 { 00:22:11.090 "name": "BaseBdev4", 00:22:11.090 "aliases": [ 00:22:11.090 "1062a921-83c1-4de3-8e39-d20309cd96ad" 00:22:11.090 ], 00:22:11.090 "product_name": "Malloc disk", 00:22:11.090 "block_size": 512, 00:22:11.090 "num_blocks": 65536, 00:22:11.090 "uuid": "1062a921-83c1-4de3-8e39-d20309cd96ad", 00:22:11.090 "assigned_rate_limits": { 00:22:11.090 "rw_ios_per_sec": 0, 00:22:11.090 "rw_mbytes_per_sec": 0, 00:22:11.090 "r_mbytes_per_sec": 0, 00:22:11.090 "w_mbytes_per_sec": 0 00:22:11.090 }, 00:22:11.090 "claimed": false, 00:22:11.090 "zoned": false, 00:22:11.090 "supported_io_types": { 00:22:11.090 "read": true, 00:22:11.090 "write": true, 00:22:11.090 "unmap": true, 00:22:11.090 "flush": true, 00:22:11.090 "reset": true, 00:22:11.090 "nvme_admin": false, 00:22:11.090 "nvme_io": false, 00:22:11.090 "nvme_io_md": false, 00:22:11.090 "write_zeroes": true, 00:22:11.090 "zcopy": true, 00:22:11.090 "get_zone_info": false, 00:22:11.090 "zone_management": false, 00:22:11.090 "zone_append": false, 00:22:11.090 "compare": false, 00:22:11.090 "compare_and_write": false, 00:22:11.090 "abort": true, 00:22:11.090 "seek_hole": false, 00:22:11.090 "seek_data": false, 00:22:11.090 "copy": true, 00:22:11.090 "nvme_iov_md": false 00:22:11.090 }, 00:22:11.090 "memory_domains": [ 00:22:11.090 { 00:22:11.090 "dma_device_id": "system", 00:22:11.090 "dma_device_type": 1 00:22:11.090 }, 00:22:11.090 { 00:22:11.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.090 "dma_device_type": 2 00:22:11.090 } 00:22:11.090 ], 00:22:11.090 "driver_specific": {} 00:22:11.090 } 00:22:11.090 ] 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.090 [2024-11-08 17:09:47.743004] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:11.090 [2024-11-08 17:09:47.743152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:11.090 [2024-11-08 17:09:47.743223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:11.090 [2024-11-08 17:09:47.745170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:11.090 [2024-11-08 17:09:47.745302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.090 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.090 "name": "Existed_Raid", 00:22:11.090 "uuid": "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79", 00:22:11.090 "strip_size_kb": 0, 00:22:11.090 "state": "configuring", 00:22:11.090 "raid_level": "raid1", 00:22:11.090 "superblock": true, 00:22:11.090 "num_base_bdevs": 4, 00:22:11.090 "num_base_bdevs_discovered": 3, 00:22:11.090 "num_base_bdevs_operational": 4, 00:22:11.090 "base_bdevs_list": [ 00:22:11.090 { 00:22:11.090 "name": "BaseBdev1", 00:22:11.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.090 "is_configured": false, 00:22:11.090 "data_offset": 0, 00:22:11.090 "data_size": 0 00:22:11.090 }, 00:22:11.090 { 00:22:11.090 "name": "BaseBdev2", 00:22:11.090 "uuid": "98cc9116-4fbd-4a57-9d6a-28b3bab1219e", 
00:22:11.090 "is_configured": true, 00:22:11.090 "data_offset": 2048, 00:22:11.091 "data_size": 63488 00:22:11.091 }, 00:22:11.091 { 00:22:11.091 "name": "BaseBdev3", 00:22:11.091 "uuid": "cfe7e697-dfde-498a-8046-a110c2ccc4dd", 00:22:11.091 "is_configured": true, 00:22:11.091 "data_offset": 2048, 00:22:11.091 "data_size": 63488 00:22:11.091 }, 00:22:11.091 { 00:22:11.091 "name": "BaseBdev4", 00:22:11.091 "uuid": "1062a921-83c1-4de3-8e39-d20309cd96ad", 00:22:11.091 "is_configured": true, 00:22:11.091 "data_offset": 2048, 00:22:11.091 "data_size": 63488 00:22:11.091 } 00:22:11.091 ] 00:22:11.091 }' 00:22:11.091 17:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.091 17:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.665 [2024-11-08 17:09:48.071118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.665 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.666 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.666 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.666 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.666 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.666 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.666 "name": "Existed_Raid", 00:22:11.666 "uuid": "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79", 00:22:11.666 "strip_size_kb": 0, 00:22:11.666 "state": "configuring", 00:22:11.666 "raid_level": "raid1", 00:22:11.666 "superblock": true, 00:22:11.666 "num_base_bdevs": 4, 00:22:11.666 "num_base_bdevs_discovered": 2, 00:22:11.666 "num_base_bdevs_operational": 4, 00:22:11.666 "base_bdevs_list": [ 00:22:11.666 { 00:22:11.666 "name": "BaseBdev1", 00:22:11.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.666 "is_configured": false, 00:22:11.666 "data_offset": 0, 00:22:11.666 "data_size": 0 00:22:11.666 }, 00:22:11.666 { 00:22:11.666 "name": null, 00:22:11.666 "uuid": "98cc9116-4fbd-4a57-9d6a-28b3bab1219e", 00:22:11.666 
"is_configured": false, 00:22:11.666 "data_offset": 0, 00:22:11.666 "data_size": 63488 00:22:11.666 }, 00:22:11.666 { 00:22:11.666 "name": "BaseBdev3", 00:22:11.666 "uuid": "cfe7e697-dfde-498a-8046-a110c2ccc4dd", 00:22:11.666 "is_configured": true, 00:22:11.666 "data_offset": 2048, 00:22:11.666 "data_size": 63488 00:22:11.666 }, 00:22:11.666 { 00:22:11.666 "name": "BaseBdev4", 00:22:11.666 "uuid": "1062a921-83c1-4de3-8e39-d20309cd96ad", 00:22:11.666 "is_configured": true, 00:22:11.667 "data_offset": 2048, 00:22:11.667 "data_size": 63488 00:22:11.667 } 00:22:11.667 ] 00:22:11.667 }' 00:22:11.667 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.667 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.931 [2024-11-08 17:09:48.464002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:11.931 BaseBdev1 
00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.931 [ 00:22:11.931 { 00:22:11.931 "name": "BaseBdev1", 00:22:11.931 "aliases": [ 00:22:11.931 "9bc067c5-8491-440e-b203-576b2c126aed" 00:22:11.931 ], 00:22:11.931 "product_name": "Malloc disk", 00:22:11.931 "block_size": 512, 00:22:11.931 "num_blocks": 65536, 00:22:11.931 "uuid": "9bc067c5-8491-440e-b203-576b2c126aed", 00:22:11.931 "assigned_rate_limits": { 00:22:11.931 
"rw_ios_per_sec": 0, 00:22:11.931 "rw_mbytes_per_sec": 0, 00:22:11.931 "r_mbytes_per_sec": 0, 00:22:11.931 "w_mbytes_per_sec": 0 00:22:11.931 }, 00:22:11.931 "claimed": true, 00:22:11.931 "claim_type": "exclusive_write", 00:22:11.931 "zoned": false, 00:22:11.931 "supported_io_types": { 00:22:11.931 "read": true, 00:22:11.931 "write": true, 00:22:11.931 "unmap": true, 00:22:11.931 "flush": true, 00:22:11.931 "reset": true, 00:22:11.931 "nvme_admin": false, 00:22:11.931 "nvme_io": false, 00:22:11.931 "nvme_io_md": false, 00:22:11.931 "write_zeroes": true, 00:22:11.931 "zcopy": true, 00:22:11.931 "get_zone_info": false, 00:22:11.931 "zone_management": false, 00:22:11.931 "zone_append": false, 00:22:11.931 "compare": false, 00:22:11.931 "compare_and_write": false, 00:22:11.931 "abort": true, 00:22:11.931 "seek_hole": false, 00:22:11.931 "seek_data": false, 00:22:11.931 "copy": true, 00:22:11.931 "nvme_iov_md": false 00:22:11.931 }, 00:22:11.931 "memory_domains": [ 00:22:11.931 { 00:22:11.931 "dma_device_id": "system", 00:22:11.931 "dma_device_type": 1 00:22:11.931 }, 00:22:11.931 { 00:22:11.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.931 "dma_device_type": 2 00:22:11.931 } 00:22:11.931 ], 00:22:11.931 "driver_specific": {} 00:22:11.931 } 00:22:11.931 ] 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.931 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.931 "name": "Existed_Raid", 00:22:11.931 "uuid": "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79", 00:22:11.931 "strip_size_kb": 0, 00:22:11.931 "state": "configuring", 00:22:11.931 "raid_level": "raid1", 00:22:11.931 "superblock": true, 00:22:11.931 "num_base_bdevs": 4, 00:22:11.931 "num_base_bdevs_discovered": 3, 00:22:11.931 "num_base_bdevs_operational": 4, 00:22:11.931 "base_bdevs_list": [ 00:22:11.931 { 00:22:11.931 "name": "BaseBdev1", 00:22:11.931 "uuid": "9bc067c5-8491-440e-b203-576b2c126aed", 00:22:11.931 "is_configured": true, 00:22:11.931 "data_offset": 2048, 00:22:11.931 "data_size": 63488 
00:22:11.931 }, 00:22:11.931 { 00:22:11.931 "name": null, 00:22:11.931 "uuid": "98cc9116-4fbd-4a57-9d6a-28b3bab1219e", 00:22:11.931 "is_configured": false, 00:22:11.931 "data_offset": 0, 00:22:11.932 "data_size": 63488 00:22:11.932 }, 00:22:11.932 { 00:22:11.932 "name": "BaseBdev3", 00:22:11.932 "uuid": "cfe7e697-dfde-498a-8046-a110c2ccc4dd", 00:22:11.932 "is_configured": true, 00:22:11.932 "data_offset": 2048, 00:22:11.932 "data_size": 63488 00:22:11.932 }, 00:22:11.932 { 00:22:11.932 "name": "BaseBdev4", 00:22:11.932 "uuid": "1062a921-83c1-4de3-8e39-d20309cd96ad", 00:22:11.932 "is_configured": true, 00:22:11.932 "data_offset": 2048, 00:22:11.932 "data_size": 63488 00:22:11.932 } 00:22:11.932 ] 00:22:11.932 }' 00:22:11.932 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.932 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 
[2024-11-08 17:09:48.836167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.190 17:09:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.190 "name": "Existed_Raid", 00:22:12.190 "uuid": "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79", 00:22:12.190 "strip_size_kb": 0, 00:22:12.190 "state": "configuring", 00:22:12.190 "raid_level": "raid1", 00:22:12.190 "superblock": true, 00:22:12.190 "num_base_bdevs": 4, 00:22:12.190 "num_base_bdevs_discovered": 2, 00:22:12.190 "num_base_bdevs_operational": 4, 00:22:12.190 "base_bdevs_list": [ 00:22:12.190 { 00:22:12.190 "name": "BaseBdev1", 00:22:12.190 "uuid": "9bc067c5-8491-440e-b203-576b2c126aed", 00:22:12.190 "is_configured": true, 00:22:12.190 "data_offset": 2048, 00:22:12.190 "data_size": 63488 00:22:12.190 }, 00:22:12.190 { 00:22:12.190 "name": null, 00:22:12.190 "uuid": "98cc9116-4fbd-4a57-9d6a-28b3bab1219e", 00:22:12.190 "is_configured": false, 00:22:12.190 "data_offset": 0, 00:22:12.190 "data_size": 63488 00:22:12.190 }, 00:22:12.190 { 00:22:12.190 "name": null, 00:22:12.190 "uuid": "cfe7e697-dfde-498a-8046-a110c2ccc4dd", 00:22:12.190 "is_configured": false, 00:22:12.190 "data_offset": 0, 00:22:12.190 "data_size": 63488 00:22:12.190 }, 00:22:12.190 { 00:22:12.190 "name": "BaseBdev4", 00:22:12.190 "uuid": "1062a921-83c1-4de3-8e39-d20309cd96ad", 00:22:12.190 "is_configured": true, 00:22:12.190 "data_offset": 2048, 00:22:12.190 "data_size": 63488 00:22:12.190 } 00:22:12.190 ] 00:22:12.190 }' 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.190 17:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.755 
17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.755 [2024-11-08 17:09:49.252259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.755 "name": "Existed_Raid", 00:22:12.755 "uuid": "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79", 00:22:12.755 "strip_size_kb": 0, 00:22:12.755 "state": "configuring", 00:22:12.755 "raid_level": "raid1", 00:22:12.755 "superblock": true, 00:22:12.755 "num_base_bdevs": 4, 00:22:12.755 "num_base_bdevs_discovered": 3, 00:22:12.755 "num_base_bdevs_operational": 4, 00:22:12.755 "base_bdevs_list": [ 00:22:12.755 { 00:22:12.755 "name": "BaseBdev1", 00:22:12.755 "uuid": "9bc067c5-8491-440e-b203-576b2c126aed", 00:22:12.755 "is_configured": true, 00:22:12.755 "data_offset": 2048, 00:22:12.755 "data_size": 63488 00:22:12.755 }, 00:22:12.755 { 00:22:12.755 "name": null, 00:22:12.755 "uuid": "98cc9116-4fbd-4a57-9d6a-28b3bab1219e", 00:22:12.755 "is_configured": false, 00:22:12.755 "data_offset": 0, 00:22:12.755 "data_size": 63488 00:22:12.755 }, 00:22:12.755 { 00:22:12.755 "name": "BaseBdev3", 00:22:12.755 "uuid": "cfe7e697-dfde-498a-8046-a110c2ccc4dd", 00:22:12.755 "is_configured": true, 00:22:12.755 "data_offset": 2048, 00:22:12.755 "data_size": 63488 00:22:12.755 }, 00:22:12.755 { 00:22:12.755 "name": "BaseBdev4", 00:22:12.755 "uuid": 
"1062a921-83c1-4de3-8e39-d20309cd96ad", 00:22:12.755 "is_configured": true, 00:22:12.755 "data_offset": 2048, 00:22:12.755 "data_size": 63488 00:22:12.755 } 00:22:12.755 ] 00:22:12.755 }' 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.755 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.036 [2024-11-08 17:09:49.632397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:13.036 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.037 "name": "Existed_Raid", 00:22:13.037 "uuid": "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79", 00:22:13.037 "strip_size_kb": 0, 00:22:13.037 "state": "configuring", 00:22:13.037 "raid_level": "raid1", 00:22:13.037 "superblock": true, 00:22:13.037 "num_base_bdevs": 4, 00:22:13.037 "num_base_bdevs_discovered": 2, 00:22:13.037 "num_base_bdevs_operational": 4, 00:22:13.037 "base_bdevs_list": [ 00:22:13.037 { 00:22:13.037 "name": null, 00:22:13.037 
"uuid": "9bc067c5-8491-440e-b203-576b2c126aed", 00:22:13.037 "is_configured": false, 00:22:13.037 "data_offset": 0, 00:22:13.037 "data_size": 63488 00:22:13.037 }, 00:22:13.037 { 00:22:13.037 "name": null, 00:22:13.037 "uuid": "98cc9116-4fbd-4a57-9d6a-28b3bab1219e", 00:22:13.037 "is_configured": false, 00:22:13.037 "data_offset": 0, 00:22:13.037 "data_size": 63488 00:22:13.037 }, 00:22:13.037 { 00:22:13.037 "name": "BaseBdev3", 00:22:13.037 "uuid": "cfe7e697-dfde-498a-8046-a110c2ccc4dd", 00:22:13.037 "is_configured": true, 00:22:13.037 "data_offset": 2048, 00:22:13.037 "data_size": 63488 00:22:13.037 }, 00:22:13.037 { 00:22:13.037 "name": "BaseBdev4", 00:22:13.037 "uuid": "1062a921-83c1-4de3-8e39-d20309cd96ad", 00:22:13.037 "is_configured": true, 00:22:13.037 "data_offset": 2048, 00:22:13.037 "data_size": 63488 00:22:13.037 } 00:22:13.037 ] 00:22:13.037 }' 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.037 17:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.295 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:13.295 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.295 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.295 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.556 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.556 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:13.556 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:13.556 17:09:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.556 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.556 [2024-11-08 17:09:50.035794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:13.556 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.556 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:13.556 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.556 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:13.556 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.557 17:09:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.557 "name": "Existed_Raid", 00:22:13.557 "uuid": "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79", 00:22:13.557 "strip_size_kb": 0, 00:22:13.557 "state": "configuring", 00:22:13.557 "raid_level": "raid1", 00:22:13.557 "superblock": true, 00:22:13.557 "num_base_bdevs": 4, 00:22:13.557 "num_base_bdevs_discovered": 3, 00:22:13.557 "num_base_bdevs_operational": 4, 00:22:13.557 "base_bdevs_list": [ 00:22:13.557 { 00:22:13.557 "name": null, 00:22:13.557 "uuid": "9bc067c5-8491-440e-b203-576b2c126aed", 00:22:13.557 "is_configured": false, 00:22:13.557 "data_offset": 0, 00:22:13.557 "data_size": 63488 00:22:13.557 }, 00:22:13.557 { 00:22:13.557 "name": "BaseBdev2", 00:22:13.557 "uuid": "98cc9116-4fbd-4a57-9d6a-28b3bab1219e", 00:22:13.557 "is_configured": true, 00:22:13.557 "data_offset": 2048, 00:22:13.557 "data_size": 63488 00:22:13.557 }, 00:22:13.557 { 00:22:13.557 "name": "BaseBdev3", 00:22:13.557 "uuid": "cfe7e697-dfde-498a-8046-a110c2ccc4dd", 00:22:13.557 "is_configured": true, 00:22:13.557 "data_offset": 2048, 00:22:13.557 "data_size": 63488 00:22:13.557 }, 00:22:13.557 { 00:22:13.557 "name": "BaseBdev4", 00:22:13.557 "uuid": "1062a921-83c1-4de3-8e39-d20309cd96ad", 00:22:13.557 "is_configured": true, 00:22:13.557 "data_offset": 2048, 00:22:13.557 "data_size": 63488 00:22:13.557 } 00:22:13.557 ] 00:22:13.557 }' 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.557 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:13.818 17:09:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9bc067c5-8491-440e-b203-576b2c126aed 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.818 [2024-11-08 17:09:50.449628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:13.818 [2024-11-08 17:09:50.449905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:13.818 [2024-11-08 17:09:50.449922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:13.818 NewBaseBdev 00:22:13.818 [2024-11-08 17:09:50.450201] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:13.818 [2024-11-08 17:09:50.450358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:13.818 [2024-11-08 17:09:50.450367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:13.818 [2024-11-08 17:09:50.450496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.818 17:09:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.818 [ 00:22:13.818 { 00:22:13.818 "name": "NewBaseBdev", 00:22:13.818 "aliases": [ 00:22:13.818 "9bc067c5-8491-440e-b203-576b2c126aed" 00:22:13.818 ], 00:22:13.818 "product_name": "Malloc disk", 00:22:13.818 "block_size": 512, 00:22:13.818 "num_blocks": 65536, 00:22:13.818 "uuid": "9bc067c5-8491-440e-b203-576b2c126aed", 00:22:13.818 "assigned_rate_limits": { 00:22:13.818 "rw_ios_per_sec": 0, 00:22:13.818 "rw_mbytes_per_sec": 0, 00:22:13.818 "r_mbytes_per_sec": 0, 00:22:13.818 "w_mbytes_per_sec": 0 00:22:13.818 }, 00:22:13.818 "claimed": true, 00:22:13.818 "claim_type": "exclusive_write", 00:22:13.818 "zoned": false, 00:22:13.818 "supported_io_types": { 00:22:13.818 "read": true, 00:22:13.818 "write": true, 00:22:13.818 "unmap": true, 00:22:13.818 "flush": true, 00:22:13.818 "reset": true, 00:22:13.818 "nvme_admin": false, 00:22:13.818 "nvme_io": false, 00:22:13.818 "nvme_io_md": false, 00:22:13.818 "write_zeroes": true, 00:22:13.818 "zcopy": true, 00:22:13.818 "get_zone_info": false, 00:22:13.818 "zone_management": false, 00:22:13.818 "zone_append": false, 00:22:13.818 "compare": false, 00:22:13.818 "compare_and_write": false, 00:22:13.818 "abort": true, 00:22:13.818 "seek_hole": false, 00:22:13.818 "seek_data": false, 00:22:13.818 "copy": true, 00:22:13.818 "nvme_iov_md": false 00:22:13.818 }, 00:22:13.818 "memory_domains": [ 00:22:13.818 { 00:22:13.818 "dma_device_id": "system", 00:22:13.818 "dma_device_type": 1 00:22:13.818 }, 00:22:13.818 { 00:22:13.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.818 "dma_device_type": 2 00:22:13.818 } 00:22:13.818 ], 00:22:13.818 "driver_specific": {} 00:22:13.818 } 00:22:13.818 ] 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:13.818 17:09:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.818 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.818 "name": "Existed_Raid", 00:22:13.818 "uuid": "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79", 00:22:13.818 "strip_size_kb": 0, 00:22:13.818 
"state": "online", 00:22:13.818 "raid_level": "raid1", 00:22:13.818 "superblock": true, 00:22:13.818 "num_base_bdevs": 4, 00:22:13.818 "num_base_bdevs_discovered": 4, 00:22:13.818 "num_base_bdevs_operational": 4, 00:22:13.818 "base_bdevs_list": [ 00:22:13.818 { 00:22:13.818 "name": "NewBaseBdev", 00:22:13.818 "uuid": "9bc067c5-8491-440e-b203-576b2c126aed", 00:22:13.818 "is_configured": true, 00:22:13.818 "data_offset": 2048, 00:22:13.818 "data_size": 63488 00:22:13.818 }, 00:22:13.818 { 00:22:13.818 "name": "BaseBdev2", 00:22:13.818 "uuid": "98cc9116-4fbd-4a57-9d6a-28b3bab1219e", 00:22:13.818 "is_configured": true, 00:22:13.818 "data_offset": 2048, 00:22:13.818 "data_size": 63488 00:22:13.818 }, 00:22:13.818 { 00:22:13.818 "name": "BaseBdev3", 00:22:13.818 "uuid": "cfe7e697-dfde-498a-8046-a110c2ccc4dd", 00:22:13.818 "is_configured": true, 00:22:13.818 "data_offset": 2048, 00:22:13.818 "data_size": 63488 00:22:13.818 }, 00:22:13.818 { 00:22:13.818 "name": "BaseBdev4", 00:22:13.819 "uuid": "1062a921-83c1-4de3-8e39-d20309cd96ad", 00:22:13.819 "is_configured": true, 00:22:13.819 "data_offset": 2048, 00:22:13.819 "data_size": 63488 00:22:13.819 } 00:22:13.819 ] 00:22:13.819 }' 00:22:13.819 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.819 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:14.432 
17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:14.432 [2024-11-08 17:09:50.826172] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.432 "name": "Existed_Raid", 00:22:14.432 "aliases": [ 00:22:14.432 "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79" 00:22:14.432 ], 00:22:14.432 "product_name": "Raid Volume", 00:22:14.432 "block_size": 512, 00:22:14.432 "num_blocks": 63488, 00:22:14.432 "uuid": "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79", 00:22:14.432 "assigned_rate_limits": { 00:22:14.432 "rw_ios_per_sec": 0, 00:22:14.432 "rw_mbytes_per_sec": 0, 00:22:14.432 "r_mbytes_per_sec": 0, 00:22:14.432 "w_mbytes_per_sec": 0 00:22:14.432 }, 00:22:14.432 "claimed": false, 00:22:14.432 "zoned": false, 00:22:14.432 "supported_io_types": { 00:22:14.432 "read": true, 00:22:14.432 "write": true, 00:22:14.432 "unmap": false, 00:22:14.432 "flush": false, 00:22:14.432 "reset": true, 00:22:14.432 "nvme_admin": false, 00:22:14.432 "nvme_io": false, 00:22:14.432 "nvme_io_md": false, 00:22:14.432 "write_zeroes": true, 00:22:14.432 "zcopy": false, 00:22:14.432 "get_zone_info": false, 00:22:14.432 "zone_management": false, 00:22:14.432 "zone_append": false, 00:22:14.432 "compare": false, 00:22:14.432 "compare_and_write": false, 00:22:14.432 
"abort": false, 00:22:14.432 "seek_hole": false, 00:22:14.432 "seek_data": false, 00:22:14.432 "copy": false, 00:22:14.432 "nvme_iov_md": false 00:22:14.432 }, 00:22:14.432 "memory_domains": [ 00:22:14.432 { 00:22:14.432 "dma_device_id": "system", 00:22:14.432 "dma_device_type": 1 00:22:14.432 }, 00:22:14.432 { 00:22:14.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.432 "dma_device_type": 2 00:22:14.432 }, 00:22:14.432 { 00:22:14.432 "dma_device_id": "system", 00:22:14.432 "dma_device_type": 1 00:22:14.432 }, 00:22:14.432 { 00:22:14.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.432 "dma_device_type": 2 00:22:14.432 }, 00:22:14.432 { 00:22:14.432 "dma_device_id": "system", 00:22:14.432 "dma_device_type": 1 00:22:14.432 }, 00:22:14.432 { 00:22:14.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.432 "dma_device_type": 2 00:22:14.432 }, 00:22:14.432 { 00:22:14.432 "dma_device_id": "system", 00:22:14.432 "dma_device_type": 1 00:22:14.432 }, 00:22:14.432 { 00:22:14.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.432 "dma_device_type": 2 00:22:14.432 } 00:22:14.432 ], 00:22:14.432 "driver_specific": { 00:22:14.432 "raid": { 00:22:14.432 "uuid": "6ad11d3a-8f41-4e1a-85d2-8c3df5e62c79", 00:22:14.432 "strip_size_kb": 0, 00:22:14.432 "state": "online", 00:22:14.432 "raid_level": "raid1", 00:22:14.432 "superblock": true, 00:22:14.432 "num_base_bdevs": 4, 00:22:14.432 "num_base_bdevs_discovered": 4, 00:22:14.432 "num_base_bdevs_operational": 4, 00:22:14.432 "base_bdevs_list": [ 00:22:14.432 { 00:22:14.432 "name": "NewBaseBdev", 00:22:14.432 "uuid": "9bc067c5-8491-440e-b203-576b2c126aed", 00:22:14.432 "is_configured": true, 00:22:14.432 "data_offset": 2048, 00:22:14.432 "data_size": 63488 00:22:14.432 }, 00:22:14.432 { 00:22:14.432 "name": "BaseBdev2", 00:22:14.432 "uuid": "98cc9116-4fbd-4a57-9d6a-28b3bab1219e", 00:22:14.432 "is_configured": true, 00:22:14.432 "data_offset": 2048, 00:22:14.432 "data_size": 63488 00:22:14.432 }, 00:22:14.432 { 
00:22:14.432 "name": "BaseBdev3", 00:22:14.432 "uuid": "cfe7e697-dfde-498a-8046-a110c2ccc4dd", 00:22:14.432 "is_configured": true, 00:22:14.432 "data_offset": 2048, 00:22:14.432 "data_size": 63488 00:22:14.432 }, 00:22:14.432 { 00:22:14.432 "name": "BaseBdev4", 00:22:14.432 "uuid": "1062a921-83c1-4de3-8e39-d20309cd96ad", 00:22:14.432 "is_configured": true, 00:22:14.432 "data_offset": 2048, 00:22:14.432 "data_size": 63488 00:22:14.432 } 00:22:14.432 ] 00:22:14.432 } 00:22:14.432 } 00:22:14.432 }' 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:14.432 BaseBdev2 00:22:14.432 BaseBdev3 00:22:14.432 BaseBdev4' 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.432 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:14.433 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.433 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.433 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.433 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.433 17:09:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.433 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.433 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:14.433 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.433 17:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.433 17:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.433 [2024-11-08 17:09:51.041854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:14.433 [2024-11-08 17:09:51.041991] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.433 [2024-11-08 17:09:51.042095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.433 [2024-11-08 17:09:51.042413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.433 [2024-11-08 17:09:51.042427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72297 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72297 ']' 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72297 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72297 00:22:14.433 killing process with pid 72297 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72297' 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72297 00:22:14.433 [2024-11-08 17:09:51.075093] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:14.433 17:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72297 00:22:14.694 [2024-11-08 17:09:51.350012] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:15.639 ************************************ 00:22:15.639 END TEST raid_state_function_test_sb 00:22:15.639 ************************************ 00:22:15.639 17:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:15.639 00:22:15.639 real 0m8.757s 
00:22:15.639 user 0m13.785s 00:22:15.639 sys 0m1.383s 00:22:15.639 17:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:15.639 17:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.639 17:09:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:22:15.639 17:09:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:15.639 17:09:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:15.639 17:09:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:15.639 ************************************ 00:22:15.639 START TEST raid_superblock_test 00:22:15.639 ************************************ 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:15.639 17:09:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:15.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72940 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72940 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72940 ']' 00:22:15.639 17:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.640 17:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:15.640 17:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:15.640 17:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.640 17:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:15.640 17:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.916 [2024-11-08 17:09:52.388827] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:22:15.916 [2024-11-08 17:09:52.389268] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72940 ] 00:22:15.916 [2024-11-08 17:09:52.554121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.177 [2024-11-08 17:09:52.718150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.440 [2024-11-08 17:09:52.901463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.440 [2024-11-08 17:09:52.901550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.702 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:16.702 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:22:16.702 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:16.702 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:16.703 
17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.703 malloc1 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.703 [2024-11-08 17:09:53.385899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:16.703 [2024-11-08 17:09:53.386003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.703 [2024-11-08 17:09:53.386036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:16.703 [2024-11-08 17:09:53.386048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.703 [2024-11-08 17:09:53.388886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.703 [2024-11-08 17:09:53.388942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:16.703 pt1 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.703 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.964 malloc2 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.964 [2024-11-08 17:09:53.438793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:16.964 [2024-11-08 17:09:53.439055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.964 [2024-11-08 17:09:53.439093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:16.964 [2024-11-08 17:09:53.439103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.964 [2024-11-08 17:09:53.441846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.964 [2024-11-08 17:09:53.441895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:16.964 
pt2 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.964 malloc3 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.964 [2024-11-08 17:09:53.504500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:16.964 [2024-11-08 17:09:53.504584] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.964 [2024-11-08 17:09:53.504613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:16.964 [2024-11-08 17:09:53.504624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.964 [2024-11-08 17:09:53.507411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.964 [2024-11-08 17:09:53.507471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:16.964 pt3 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.964 malloc4 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.964 [2024-11-08 17:09:53.557407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:16.964 [2024-11-08 17:09:53.557495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.964 [2024-11-08 17:09:53.557521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:16.964 [2024-11-08 17:09:53.557533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.964 [2024-11-08 17:09:53.560279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.964 [2024-11-08 17:09:53.560504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:16.964 pt4 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.964 [2024-11-08 17:09:53.569504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:16.964 [2024-11-08 17:09:53.571913] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:16.964 [2024-11-08 17:09:53.572004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:16.964 [2024-11-08 17:09:53.572058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:16.964 [2024-11-08 17:09:53.572287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:16.964 [2024-11-08 17:09:53.572305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:16.964 [2024-11-08 17:09:53.572659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:16.964 [2024-11-08 17:09:53.572900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:16.964 [2024-11-08 17:09:53.572919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:16.964 [2024-11-08 17:09:53.573104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.964 
17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.964 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.965 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.965 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.965 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.965 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.965 "name": "raid_bdev1", 00:22:16.965 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:16.965 "strip_size_kb": 0, 00:22:16.965 "state": "online", 00:22:16.965 "raid_level": "raid1", 00:22:16.965 "superblock": true, 00:22:16.965 "num_base_bdevs": 4, 00:22:16.965 "num_base_bdevs_discovered": 4, 00:22:16.965 "num_base_bdevs_operational": 4, 00:22:16.965 "base_bdevs_list": [ 00:22:16.965 { 00:22:16.965 "name": "pt1", 00:22:16.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:16.965 "is_configured": true, 00:22:16.965 "data_offset": 2048, 00:22:16.965 "data_size": 63488 00:22:16.965 }, 00:22:16.965 { 00:22:16.965 "name": "pt2", 00:22:16.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:16.965 "is_configured": true, 00:22:16.965 "data_offset": 2048, 00:22:16.965 "data_size": 63488 00:22:16.965 }, 00:22:16.965 { 00:22:16.965 "name": "pt3", 00:22:16.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:16.965 "is_configured": true, 00:22:16.965 "data_offset": 2048, 00:22:16.965 "data_size": 63488 
00:22:16.965 }, 00:22:16.965 { 00:22:16.965 "name": "pt4", 00:22:16.965 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:16.965 "is_configured": true, 00:22:16.965 "data_offset": 2048, 00:22:16.965 "data_size": 63488 00:22:16.965 } 00:22:16.965 ] 00:22:16.965 }' 00:22:16.965 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.965 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:17.225 [2024-11-08 17:09:53.910020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.225 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:17.225 "name": "raid_bdev1", 00:22:17.225 "aliases": [ 00:22:17.225 "82f1cf11-675d-4557-9c82-006adea0454e" 00:22:17.225 ], 
00:22:17.225 "product_name": "Raid Volume", 00:22:17.225 "block_size": 512, 00:22:17.225 "num_blocks": 63488, 00:22:17.225 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:17.225 "assigned_rate_limits": { 00:22:17.225 "rw_ios_per_sec": 0, 00:22:17.225 "rw_mbytes_per_sec": 0, 00:22:17.225 "r_mbytes_per_sec": 0, 00:22:17.225 "w_mbytes_per_sec": 0 00:22:17.225 }, 00:22:17.225 "claimed": false, 00:22:17.225 "zoned": false, 00:22:17.225 "supported_io_types": { 00:22:17.225 "read": true, 00:22:17.225 "write": true, 00:22:17.225 "unmap": false, 00:22:17.225 "flush": false, 00:22:17.225 "reset": true, 00:22:17.225 "nvme_admin": false, 00:22:17.225 "nvme_io": false, 00:22:17.225 "nvme_io_md": false, 00:22:17.225 "write_zeroes": true, 00:22:17.225 "zcopy": false, 00:22:17.225 "get_zone_info": false, 00:22:17.225 "zone_management": false, 00:22:17.225 "zone_append": false, 00:22:17.225 "compare": false, 00:22:17.225 "compare_and_write": false, 00:22:17.225 "abort": false, 00:22:17.225 "seek_hole": false, 00:22:17.225 "seek_data": false, 00:22:17.225 "copy": false, 00:22:17.225 "nvme_iov_md": false 00:22:17.225 }, 00:22:17.225 "memory_domains": [ 00:22:17.225 { 00:22:17.226 "dma_device_id": "system", 00:22:17.226 "dma_device_type": 1 00:22:17.226 }, 00:22:17.226 { 00:22:17.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.226 "dma_device_type": 2 00:22:17.226 }, 00:22:17.226 { 00:22:17.226 "dma_device_id": "system", 00:22:17.226 "dma_device_type": 1 00:22:17.226 }, 00:22:17.226 { 00:22:17.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.226 "dma_device_type": 2 00:22:17.226 }, 00:22:17.226 { 00:22:17.226 "dma_device_id": "system", 00:22:17.226 "dma_device_type": 1 00:22:17.226 }, 00:22:17.226 { 00:22:17.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.226 "dma_device_type": 2 00:22:17.226 }, 00:22:17.226 { 00:22:17.226 "dma_device_id": "system", 00:22:17.226 "dma_device_type": 1 00:22:17.226 }, 00:22:17.226 { 00:22:17.226 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:17.226 "dma_device_type": 2 00:22:17.226 } 00:22:17.226 ], 00:22:17.226 "driver_specific": { 00:22:17.226 "raid": { 00:22:17.226 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:17.226 "strip_size_kb": 0, 00:22:17.226 "state": "online", 00:22:17.226 "raid_level": "raid1", 00:22:17.226 "superblock": true, 00:22:17.226 "num_base_bdevs": 4, 00:22:17.226 "num_base_bdevs_discovered": 4, 00:22:17.226 "num_base_bdevs_operational": 4, 00:22:17.226 "base_bdevs_list": [ 00:22:17.226 { 00:22:17.226 "name": "pt1", 00:22:17.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.226 "is_configured": true, 00:22:17.226 "data_offset": 2048, 00:22:17.226 "data_size": 63488 00:22:17.226 }, 00:22:17.226 { 00:22:17.226 "name": "pt2", 00:22:17.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.226 "is_configured": true, 00:22:17.226 "data_offset": 2048, 00:22:17.226 "data_size": 63488 00:22:17.226 }, 00:22:17.226 { 00:22:17.226 "name": "pt3", 00:22:17.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.226 "is_configured": true, 00:22:17.226 "data_offset": 2048, 00:22:17.226 "data_size": 63488 00:22:17.226 }, 00:22:17.226 { 00:22:17.226 "name": "pt4", 00:22:17.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:17.226 "is_configured": true, 00:22:17.226 "data_offset": 2048, 00:22:17.226 "data_size": 63488 00:22:17.226 } 00:22:17.226 ] 00:22:17.226 } 00:22:17.226 } 00:22:17.226 }' 00:22:17.486 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:17.486 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:17.486 pt2 00:22:17.486 pt3 00:22:17.486 pt4' 00:22:17.486 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.486 17:09:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:17.486 17:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.486 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:17.486 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.487 17:09:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.487 [2024-11-08 17:09:54.137949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=82f1cf11-675d-4557-9c82-006adea0454e 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 82f1cf11-675d-4557-9c82-006adea0454e ']' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.487 [2024-11-08 17:09:54.161551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.487 [2024-11-08 17:09:54.161604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:17.487 [2024-11-08 17:09:54.161716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.487 [2024-11-08 17:09:54.161853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.487 [2024-11-08 17:09:54.161881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.487 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.771 17:09:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.771 [2024-11-08 17:09:54.269617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:17.771 [2024-11-08 17:09:54.272047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:17.771 [2024-11-08 17:09:54.272123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:17.771 [2024-11-08 17:09:54.272161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:17.771 [2024-11-08 17:09:54.272227] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:17.771 [2024-11-08 17:09:54.272288] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:17.771 [2024-11-08 17:09:54.272307] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:17.771 [2024-11-08 17:09:54.272327] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:17.771 [2024-11-08 17:09:54.272341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.771 [2024-11-08 17:09:54.272355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:22:17.771 request: 00:22:17.771 { 00:22:17.771 "name": "raid_bdev1", 00:22:17.771 "raid_level": "raid1", 00:22:17.771 "base_bdevs": [ 00:22:17.771 "malloc1", 00:22:17.771 "malloc2", 00:22:17.771 "malloc3", 00:22:17.771 "malloc4" 00:22:17.771 ], 00:22:17.771 "superblock": false, 00:22:17.771 "method": "bdev_raid_create", 00:22:17.771 "req_id": 1 00:22:17.771 } 00:22:17.771 Got JSON-RPC error response 00:22:17.771 response: 00:22:17.771 { 00:22:17.771 "code": -17, 00:22:17.771 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:17.771 } 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:17.771 
17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.771 [2024-11-08 17:09:54.313598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:17.771 [2024-11-08 17:09:54.313667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.771 [2024-11-08 17:09:54.313687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:17.771 [2024-11-08 17:09:54.313700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.771 [2024-11-08 17:09:54.316486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.771 [2024-11-08 17:09:54.316551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:17.771 [2024-11-08 17:09:54.316644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:17.771 [2024-11-08 17:09:54.316711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:17.771 pt1 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:17.771 17:09:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.771 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.771 "name": "raid_bdev1", 00:22:17.771 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:17.771 "strip_size_kb": 0, 00:22:17.771 "state": "configuring", 00:22:17.771 "raid_level": "raid1", 00:22:17.771 "superblock": true, 00:22:17.771 "num_base_bdevs": 4, 00:22:17.771 "num_base_bdevs_discovered": 1, 00:22:17.771 "num_base_bdevs_operational": 4, 00:22:17.771 "base_bdevs_list": [ 00:22:17.771 { 00:22:17.771 "name": "pt1", 00:22:17.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.771 "is_configured": true, 00:22:17.771 "data_offset": 2048, 00:22:17.771 "data_size": 63488 00:22:17.771 }, 00:22:17.771 { 00:22:17.771 "name": null, 00:22:17.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.771 "is_configured": false, 00:22:17.771 "data_offset": 2048, 00:22:17.771 "data_size": 63488 00:22:17.771 }, 00:22:17.771 { 00:22:17.771 "name": null, 00:22:17.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.771 
"is_configured": false, 00:22:17.771 "data_offset": 2048, 00:22:17.771 "data_size": 63488 00:22:17.771 }, 00:22:17.771 { 00:22:17.771 "name": null, 00:22:17.771 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:17.771 "is_configured": false, 00:22:17.771 "data_offset": 2048, 00:22:17.771 "data_size": 63488 00:22:17.772 } 00:22:17.772 ] 00:22:17.772 }' 00:22:17.772 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.772 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.033 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:22:18.033 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:18.033 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.033 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.033 [2024-11-08 17:09:54.629778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:18.033 [2024-11-08 17:09:54.629885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.033 [2024-11-08 17:09:54.629913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:18.033 [2024-11-08 17:09:54.629926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.034 [2024-11-08 17:09:54.630531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.034 [2024-11-08 17:09:54.630552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:18.034 [2024-11-08 17:09:54.630669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:18.034 [2024-11-08 17:09:54.630705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:22:18.034 pt2 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.034 [2024-11-08 17:09:54.637794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.034 "name": "raid_bdev1", 00:22:18.034 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:18.034 "strip_size_kb": 0, 00:22:18.034 "state": "configuring", 00:22:18.034 "raid_level": "raid1", 00:22:18.034 "superblock": true, 00:22:18.034 "num_base_bdevs": 4, 00:22:18.034 "num_base_bdevs_discovered": 1, 00:22:18.034 "num_base_bdevs_operational": 4, 00:22:18.034 "base_bdevs_list": [ 00:22:18.034 { 00:22:18.034 "name": "pt1", 00:22:18.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.034 "is_configured": true, 00:22:18.034 "data_offset": 2048, 00:22:18.034 "data_size": 63488 00:22:18.034 }, 00:22:18.034 { 00:22:18.034 "name": null, 00:22:18.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.034 "is_configured": false, 00:22:18.034 "data_offset": 0, 00:22:18.034 "data_size": 63488 00:22:18.034 }, 00:22:18.034 { 00:22:18.034 "name": null, 00:22:18.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.034 "is_configured": false, 00:22:18.034 "data_offset": 2048, 00:22:18.034 "data_size": 63488 00:22:18.034 }, 00:22:18.034 { 00:22:18.034 "name": null, 00:22:18.034 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:18.034 "is_configured": false, 00:22:18.034 "data_offset": 2048, 00:22:18.034 "data_size": 63488 00:22:18.034 } 00:22:18.034 ] 00:22:18.034 }' 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.034 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.296 [2024-11-08 17:09:54.957855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:18.296 [2024-11-08 17:09:54.957948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.296 [2024-11-08 17:09:54.957980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:18.296 [2024-11-08 17:09:54.957992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.296 [2024-11-08 17:09:54.958606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.296 [2024-11-08 17:09:54.958637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:18.296 [2024-11-08 17:09:54.958772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:18.296 [2024-11-08 17:09:54.958802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:18.296 pt2 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:18.296 17:09:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.296 [2024-11-08 17:09:54.969836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:18.296 [2024-11-08 17:09:54.969920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.296 [2024-11-08 17:09:54.969946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:18.296 [2024-11-08 17:09:54.969957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.296 [2024-11-08 17:09:54.970524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.296 [2024-11-08 17:09:54.970554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:18.296 [2024-11-08 17:09:54.970657] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:18.296 [2024-11-08 17:09:54.970683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:18.296 pt3 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.296 [2024-11-08 17:09:54.977775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:18.296 [2024-11-08 
17:09:54.977844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.296 [2024-11-08 17:09:54.977866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:18.296 [2024-11-08 17:09:54.977878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.296 [2024-11-08 17:09:54.978399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.296 [2024-11-08 17:09:54.978428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:18.296 [2024-11-08 17:09:54.978508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:18.296 [2024-11-08 17:09:54.978531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:18.296 [2024-11-08 17:09:54.978710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:18.296 [2024-11-08 17:09:54.978720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:18.296 [2024-11-08 17:09:54.979040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:18.296 [2024-11-08 17:09:54.979266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:18.296 [2024-11-08 17:09:54.979278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:18.296 [2024-11-08 17:09:54.979438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.296 pt4 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.296 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.297 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.297 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.297 17:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.297 17:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.297 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.558 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.558 "name": "raid_bdev1", 00:22:18.558 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:18.558 "strip_size_kb": 0, 00:22:18.558 "state": "online", 00:22:18.558 "raid_level": "raid1", 00:22:18.558 "superblock": true, 00:22:18.558 "num_base_bdevs": 4, 00:22:18.558 
"num_base_bdevs_discovered": 4, 00:22:18.558 "num_base_bdevs_operational": 4, 00:22:18.558 "base_bdevs_list": [ 00:22:18.558 { 00:22:18.558 "name": "pt1", 00:22:18.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.558 "is_configured": true, 00:22:18.558 "data_offset": 2048, 00:22:18.558 "data_size": 63488 00:22:18.558 }, 00:22:18.558 { 00:22:18.558 "name": "pt2", 00:22:18.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.558 "is_configured": true, 00:22:18.558 "data_offset": 2048, 00:22:18.558 "data_size": 63488 00:22:18.558 }, 00:22:18.558 { 00:22:18.558 "name": "pt3", 00:22:18.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.558 "is_configured": true, 00:22:18.558 "data_offset": 2048, 00:22:18.558 "data_size": 63488 00:22:18.558 }, 00:22:18.558 { 00:22:18.558 "name": "pt4", 00:22:18.558 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:18.558 "is_configured": true, 00:22:18.558 "data_offset": 2048, 00:22:18.558 "data_size": 63488 00:22:18.558 } 00:22:18.558 ] 00:22:18.558 }' 00:22:18.558 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.558 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:18.819 17:09:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.819 [2024-11-08 17:09:55.286335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.819 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:18.819 "name": "raid_bdev1", 00:22:18.819 "aliases": [ 00:22:18.819 "82f1cf11-675d-4557-9c82-006adea0454e" 00:22:18.820 ], 00:22:18.820 "product_name": "Raid Volume", 00:22:18.820 "block_size": 512, 00:22:18.820 "num_blocks": 63488, 00:22:18.820 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:18.820 "assigned_rate_limits": { 00:22:18.820 "rw_ios_per_sec": 0, 00:22:18.820 "rw_mbytes_per_sec": 0, 00:22:18.820 "r_mbytes_per_sec": 0, 00:22:18.820 "w_mbytes_per_sec": 0 00:22:18.820 }, 00:22:18.820 "claimed": false, 00:22:18.820 "zoned": false, 00:22:18.820 "supported_io_types": { 00:22:18.820 "read": true, 00:22:18.820 "write": true, 00:22:18.820 "unmap": false, 00:22:18.820 "flush": false, 00:22:18.820 "reset": true, 00:22:18.820 "nvme_admin": false, 00:22:18.820 "nvme_io": false, 00:22:18.820 "nvme_io_md": false, 00:22:18.820 "write_zeroes": true, 00:22:18.820 "zcopy": false, 00:22:18.820 "get_zone_info": false, 00:22:18.820 "zone_management": false, 00:22:18.820 "zone_append": false, 00:22:18.820 "compare": false, 00:22:18.820 "compare_and_write": false, 00:22:18.820 "abort": false, 00:22:18.820 "seek_hole": false, 00:22:18.820 "seek_data": false, 00:22:18.820 "copy": false, 00:22:18.820 "nvme_iov_md": false 00:22:18.820 }, 00:22:18.820 "memory_domains": [ 00:22:18.820 { 00:22:18.820 "dma_device_id": "system", 00:22:18.820 
"dma_device_type": 1 00:22:18.820 }, 00:22:18.820 { 00:22:18.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.820 "dma_device_type": 2 00:22:18.820 }, 00:22:18.820 { 00:22:18.820 "dma_device_id": "system", 00:22:18.820 "dma_device_type": 1 00:22:18.820 }, 00:22:18.820 { 00:22:18.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.820 "dma_device_type": 2 00:22:18.820 }, 00:22:18.820 { 00:22:18.820 "dma_device_id": "system", 00:22:18.820 "dma_device_type": 1 00:22:18.820 }, 00:22:18.820 { 00:22:18.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.820 "dma_device_type": 2 00:22:18.820 }, 00:22:18.820 { 00:22:18.820 "dma_device_id": "system", 00:22:18.820 "dma_device_type": 1 00:22:18.820 }, 00:22:18.820 { 00:22:18.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.820 "dma_device_type": 2 00:22:18.820 } 00:22:18.820 ], 00:22:18.820 "driver_specific": { 00:22:18.820 "raid": { 00:22:18.820 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:18.820 "strip_size_kb": 0, 00:22:18.820 "state": "online", 00:22:18.820 "raid_level": "raid1", 00:22:18.820 "superblock": true, 00:22:18.820 "num_base_bdevs": 4, 00:22:18.820 "num_base_bdevs_discovered": 4, 00:22:18.820 "num_base_bdevs_operational": 4, 00:22:18.820 "base_bdevs_list": [ 00:22:18.820 { 00:22:18.820 "name": "pt1", 00:22:18.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.820 "is_configured": true, 00:22:18.820 "data_offset": 2048, 00:22:18.820 "data_size": 63488 00:22:18.820 }, 00:22:18.820 { 00:22:18.820 "name": "pt2", 00:22:18.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.820 "is_configured": true, 00:22:18.820 "data_offset": 2048, 00:22:18.820 "data_size": 63488 00:22:18.820 }, 00:22:18.820 { 00:22:18.820 "name": "pt3", 00:22:18.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.820 "is_configured": true, 00:22:18.820 "data_offset": 2048, 00:22:18.820 "data_size": 63488 00:22:18.820 }, 00:22:18.820 { 00:22:18.820 "name": "pt4", 00:22:18.820 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:22:18.820 "is_configured": true, 00:22:18.820 "data_offset": 2048, 00:22:18.820 "data_size": 63488 00:22:18.820 } 00:22:18.820 ] 00:22:18.820 } 00:22:18.820 } 00:22:18.820 }' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:18.820 pt2 00:22:18.820 pt3 00:22:18.820 pt4' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:18.820 17:09:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:18.820 [2024-11-08 17:09:55.514316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:18.820 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 82f1cf11-675d-4557-9c82-006adea0454e '!=' 82f1cf11-675d-4557-9c82-006adea0454e ']' 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.083 [2024-11-08 17:09:55.546027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:19.083 17:09:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.083 "name": "raid_bdev1", 00:22:19.083 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:19.083 "strip_size_kb": 0, 00:22:19.083 "state": "online", 
00:22:19.083 "raid_level": "raid1", 00:22:19.083 "superblock": true, 00:22:19.083 "num_base_bdevs": 4, 00:22:19.083 "num_base_bdevs_discovered": 3, 00:22:19.083 "num_base_bdevs_operational": 3, 00:22:19.083 "base_bdevs_list": [ 00:22:19.083 { 00:22:19.083 "name": null, 00:22:19.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.083 "is_configured": false, 00:22:19.083 "data_offset": 0, 00:22:19.083 "data_size": 63488 00:22:19.083 }, 00:22:19.083 { 00:22:19.083 "name": "pt2", 00:22:19.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.083 "is_configured": true, 00:22:19.083 "data_offset": 2048, 00:22:19.083 "data_size": 63488 00:22:19.083 }, 00:22:19.083 { 00:22:19.083 "name": "pt3", 00:22:19.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.083 "is_configured": true, 00:22:19.083 "data_offset": 2048, 00:22:19.083 "data_size": 63488 00:22:19.083 }, 00:22:19.083 { 00:22:19.083 "name": "pt4", 00:22:19.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:19.083 "is_configured": true, 00:22:19.083 "data_offset": 2048, 00:22:19.083 "data_size": 63488 00:22:19.083 } 00:22:19.083 ] 00:22:19.083 }' 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.083 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.345 [2024-11-08 17:09:55.894077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:19.345 [2024-11-08 17:09:55.894136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:19.345 [2024-11-08 17:09:55.894259] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:22:19.345 [2024-11-08 17:09:55.894371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:19.345 [2024-11-08 17:09:55.894382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:19.345 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:19.346 
17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.346 [2024-11-08 17:09:55.962062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:19.346 [2024-11-08 17:09:55.962152] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.346 [2024-11-08 17:09:55.962175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:19.346 [2024-11-08 17:09:55.962186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.346 [2024-11-08 17:09:55.965111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.346 [2024-11-08 17:09:55.965169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:19.346 [2024-11-08 17:09:55.965285] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:19.346 [2024-11-08 17:09:55.965345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:19.346 pt2 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.346 17:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.346 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.346 "name": "raid_bdev1", 00:22:19.346 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:19.346 "strip_size_kb": 0, 00:22:19.346 "state": "configuring", 00:22:19.346 "raid_level": "raid1", 00:22:19.346 "superblock": true, 00:22:19.346 "num_base_bdevs": 4, 00:22:19.346 "num_base_bdevs_discovered": 1, 00:22:19.346 "num_base_bdevs_operational": 3, 00:22:19.346 "base_bdevs_list": [ 00:22:19.346 { 00:22:19.346 "name": null, 00:22:19.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.346 "is_configured": false, 00:22:19.346 "data_offset": 2048, 00:22:19.346 "data_size": 63488 00:22:19.346 }, 00:22:19.346 { 00:22:19.346 "name": "pt2", 00:22:19.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.346 "is_configured": true, 00:22:19.346 "data_offset": 2048, 00:22:19.346 "data_size": 63488 00:22:19.346 }, 00:22:19.346 { 00:22:19.346 "name": null, 00:22:19.346 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.346 "is_configured": false, 00:22:19.346 "data_offset": 2048, 00:22:19.346 "data_size": 63488 00:22:19.346 }, 00:22:19.346 { 00:22:19.346 "name": null, 00:22:19.346 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:19.346 "is_configured": false, 00:22:19.346 "data_offset": 2048, 00:22:19.346 "data_size": 63488 00:22:19.346 } 00:22:19.346 ] 00:22:19.346 }' 
00:22:19.346 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.346 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.609 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:19.609 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:19.609 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:19.609 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.609 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.609 [2024-11-08 17:09:56.318552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:19.609 [2024-11-08 17:09:56.318659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.609 [2024-11-08 17:09:56.318690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:19.609 [2024-11-08 17:09:56.318701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.609 [2024-11-08 17:09:56.319351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.609 [2024-11-08 17:09:56.319378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:19.609 [2024-11-08 17:09:56.319501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:19.609 [2024-11-08 17:09:56.319530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:19.870 pt3 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.870 "name": "raid_bdev1", 00:22:19.870 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:19.870 "strip_size_kb": 0, 00:22:19.870 "state": "configuring", 00:22:19.870 "raid_level": "raid1", 00:22:19.870 "superblock": true, 00:22:19.870 "num_base_bdevs": 4, 00:22:19.870 "num_base_bdevs_discovered": 2, 00:22:19.870 "num_base_bdevs_operational": 3, 00:22:19.870 
"base_bdevs_list": [ 00:22:19.870 { 00:22:19.870 "name": null, 00:22:19.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.870 "is_configured": false, 00:22:19.870 "data_offset": 2048, 00:22:19.870 "data_size": 63488 00:22:19.870 }, 00:22:19.870 { 00:22:19.870 "name": "pt2", 00:22:19.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.870 "is_configured": true, 00:22:19.870 "data_offset": 2048, 00:22:19.870 "data_size": 63488 00:22:19.870 }, 00:22:19.870 { 00:22:19.870 "name": "pt3", 00:22:19.870 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.870 "is_configured": true, 00:22:19.870 "data_offset": 2048, 00:22:19.870 "data_size": 63488 00:22:19.870 }, 00:22:19.870 { 00:22:19.870 "name": null, 00:22:19.870 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:19.870 "is_configured": false, 00:22:19.870 "data_offset": 2048, 00:22:19.870 "data_size": 63488 00:22:19.870 } 00:22:19.870 ] 00:22:19.870 }' 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.870 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.132 [2024-11-08 17:09:56.634628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:20.132 [2024-11-08 17:09:56.634732] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.132 [2024-11-08 17:09:56.634777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:20.132 [2024-11-08 17:09:56.634790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.132 [2024-11-08 17:09:56.635441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.132 [2024-11-08 17:09:56.635469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:20.132 [2024-11-08 17:09:56.635584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:20.132 [2024-11-08 17:09:56.635619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:20.132 [2024-11-08 17:09:56.635812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:20.132 [2024-11-08 17:09:56.635823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:20.132 [2024-11-08 17:09:56.636139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:20.132 [2024-11-08 17:09:56.636312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:20.132 [2024-11-08 17:09:56.636324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:20.132 [2024-11-08 17:09:56.636487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.132 pt4 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.132 "name": "raid_bdev1", 00:22:20.132 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:20.132 "strip_size_kb": 0, 00:22:20.132 "state": "online", 00:22:20.132 "raid_level": "raid1", 00:22:20.132 "superblock": true, 00:22:20.132 "num_base_bdevs": 4, 00:22:20.132 "num_base_bdevs_discovered": 3, 00:22:20.132 "num_base_bdevs_operational": 3, 00:22:20.132 "base_bdevs_list": [ 00:22:20.132 { 00:22:20.132 "name": null, 00:22:20.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.132 "is_configured": false, 00:22:20.132 
"data_offset": 2048, 00:22:20.132 "data_size": 63488 00:22:20.132 }, 00:22:20.132 { 00:22:20.132 "name": "pt2", 00:22:20.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.132 "is_configured": true, 00:22:20.132 "data_offset": 2048, 00:22:20.132 "data_size": 63488 00:22:20.132 }, 00:22:20.132 { 00:22:20.132 "name": "pt3", 00:22:20.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.132 "is_configured": true, 00:22:20.132 "data_offset": 2048, 00:22:20.132 "data_size": 63488 00:22:20.132 }, 00:22:20.132 { 00:22:20.132 "name": "pt4", 00:22:20.132 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:20.132 "is_configured": true, 00:22:20.132 "data_offset": 2048, 00:22:20.132 "data_size": 63488 00:22:20.132 } 00:22:20.132 ] 00:22:20.132 }' 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.132 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.394 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:20.395 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.395 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.395 [2024-11-08 17:09:56.966658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.395 [2024-11-08 17:09:56.966715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.395 [2024-11-08 17:09:56.966859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.395 [2024-11-08 17:09:56.966973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:20.395 [2024-11-08 17:09:56.966989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:20.395 17:09:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.395 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.395 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.395 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.395 17:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:20.395 17:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.395 [2024-11-08 17:09:57.018730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:20.395 [2024-11-08 17:09:57.018861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:22:20.395 [2024-11-08 17:09:57.018888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:20.395 [2024-11-08 17:09:57.018903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.395 [2024-11-08 17:09:57.022109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.395 [2024-11-08 17:09:57.022187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:20.395 [2024-11-08 17:09:57.022334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:20.395 [2024-11-08 17:09:57.022400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:20.395 [2024-11-08 17:09:57.022564] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:20.395 [2024-11-08 17:09:57.022580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.395 [2024-11-08 17:09:57.022600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:20.395 [2024-11-08 17:09:57.022677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.395 [2024-11-08 17:09:57.022895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:20.395 pt1 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.395 "name": "raid_bdev1", 00:22:20.395 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:20.395 "strip_size_kb": 0, 00:22:20.395 "state": "configuring", 00:22:20.395 "raid_level": "raid1", 00:22:20.395 "superblock": true, 00:22:20.395 "num_base_bdevs": 4, 00:22:20.395 "num_base_bdevs_discovered": 2, 00:22:20.395 "num_base_bdevs_operational": 3, 00:22:20.395 "base_bdevs_list": [ 00:22:20.395 { 00:22:20.395 "name": null, 00:22:20.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.395 "is_configured": false, 00:22:20.395 "data_offset": 2048, 00:22:20.395 
"data_size": 63488 00:22:20.395 }, 00:22:20.395 { 00:22:20.395 "name": "pt2", 00:22:20.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.395 "is_configured": true, 00:22:20.395 "data_offset": 2048, 00:22:20.395 "data_size": 63488 00:22:20.395 }, 00:22:20.395 { 00:22:20.395 "name": "pt3", 00:22:20.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.395 "is_configured": true, 00:22:20.395 "data_offset": 2048, 00:22:20.395 "data_size": 63488 00:22:20.395 }, 00:22:20.395 { 00:22:20.395 "name": null, 00:22:20.395 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:20.395 "is_configured": false, 00:22:20.395 "data_offset": 2048, 00:22:20.395 "data_size": 63488 00:22:20.395 } 00:22:20.395 ] 00:22:20.395 }' 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.395 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.969 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:22:20.969 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.969 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.969 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:20.969 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.969 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:22:20.969 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:20.969 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.969 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.969 [2024-11-08 
17:09:57.422887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:20.969 [2024-11-08 17:09:57.423009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.969 [2024-11-08 17:09:57.423042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:20.969 [2024-11-08 17:09:57.423054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.969 [2024-11-08 17:09:57.423717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.969 [2024-11-08 17:09:57.423750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:20.970 [2024-11-08 17:09:57.423901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:20.970 [2024-11-08 17:09:57.423940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:20.970 [2024-11-08 17:09:57.424113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:20.970 [2024-11-08 17:09:57.424124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:20.970 [2024-11-08 17:09:57.424448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:20.970 [2024-11-08 17:09:57.424615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:20.970 [2024-11-08 17:09:57.424628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:20.970 [2024-11-08 17:09:57.424825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.970 pt4 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:20.970 17:09:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.970 "name": "raid_bdev1", 00:22:20.970 "uuid": "82f1cf11-675d-4557-9c82-006adea0454e", 00:22:20.970 "strip_size_kb": 0, 00:22:20.970 "state": "online", 00:22:20.970 "raid_level": "raid1", 00:22:20.970 "superblock": true, 00:22:20.970 "num_base_bdevs": 4, 00:22:20.970 "num_base_bdevs_discovered": 3, 00:22:20.970 "num_base_bdevs_operational": 3, 00:22:20.970 "base_bdevs_list": [ 00:22:20.970 { 
00:22:20.970 "name": null, 00:22:20.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.970 "is_configured": false, 00:22:20.970 "data_offset": 2048, 00:22:20.970 "data_size": 63488 00:22:20.970 }, 00:22:20.970 { 00:22:20.970 "name": "pt2", 00:22:20.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.970 "is_configured": true, 00:22:20.970 "data_offset": 2048, 00:22:20.970 "data_size": 63488 00:22:20.970 }, 00:22:20.970 { 00:22:20.970 "name": "pt3", 00:22:20.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.970 "is_configured": true, 00:22:20.970 "data_offset": 2048, 00:22:20.970 "data_size": 63488 00:22:20.970 }, 00:22:20.970 { 00:22:20.970 "name": "pt4", 00:22:20.970 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:20.970 "is_configured": true, 00:22:20.970 "data_offset": 2048, 00:22:20.970 "data_size": 63488 00:22:20.970 } 00:22:20.970 ] 00:22:20.970 }' 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.970 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:21.232 
17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.232 [2024-11-08 17:09:57.783302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 82f1cf11-675d-4557-9c82-006adea0454e '!=' 82f1cf11-675d-4557-9c82-006adea0454e ']' 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72940 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72940 ']' 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72940 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72940 00:22:21.232 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:21.233 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:21.233 killing process with pid 72940 00:22:21.233 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72940' 00:22:21.233 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72940 00:22:21.233 17:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72940 00:22:21.233 [2024-11-08 17:09:57.831725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:21.233 [2024-11-08 17:09:57.831888] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:21.233 [2024-11-08 17:09:57.832004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:21.233 [2024-11-08 17:09:57.832020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:21.495 [2024-11-08 17:09:58.135067] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:22.439 17:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:22.439 00:22:22.439 real 0m6.709s 00:22:22.439 user 0m10.145s 00:22:22.439 sys 0m1.307s 00:22:22.439 ************************************ 00:22:22.439 END TEST raid_superblock_test 00:22:22.439 ************************************ 00:22:22.439 17:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:22.439 17:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.439 17:09:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:22:22.439 17:09:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:22.439 17:09:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:22.439 17:09:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:22.439 ************************************ 00:22:22.439 START TEST raid_read_error_test 00:22:22.440 ************************************ 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:22:22.440 17:09:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.30QB4LI6KP 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73411 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73411 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73411 ']' 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:22.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.440 17:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:22.701 [2024-11-08 17:09:59.213243] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:22:22.701 [2024-11-08 17:09:59.213498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73411 ] 00:22:22.701 [2024-11-08 17:09:59.395457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.964 [2024-11-08 17:09:59.561604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.225 [2024-11-08 17:09:59.743838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:23.225 [2024-11-08 17:09:59.743926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.485 BaseBdev1_malloc 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.485 true 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.485 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.486 [2024-11-08 17:10:00.128306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:23.486 [2024-11-08 17:10:00.128413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.486 [2024-11-08 17:10:00.128449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:23.486 [2024-11-08 17:10:00.128466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.486 [2024-11-08 17:10:00.131427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.486 [2024-11-08 17:10:00.131495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:23.486 BaseBdev1 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.486 BaseBdev2_malloc 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.486 true 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.486 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.486 [2024-11-08 17:10:00.193827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:23.486 [2024-11-08 17:10:00.193916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.486 [2024-11-08 17:10:00.193940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:23.486 [2024-11-08 17:10:00.193953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.486 [2024-11-08 17:10:00.196704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.486 [2024-11-08 17:10:00.196788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:23.746 BaseBdev2 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.746 BaseBdev3_malloc 00:22:23.746 17:10:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.746 true 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.746 [2024-11-08 17:10:00.266046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:23.746 [2024-11-08 17:10:00.266142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.746 [2024-11-08 17:10:00.266169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:23.746 [2024-11-08 17:10:00.266182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.746 [2024-11-08 17:10:00.268972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.746 [2024-11-08 17:10:00.269030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:23.746 BaseBdev3 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.746 BaseBdev4_malloc 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.746 true 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.746 [2024-11-08 17:10:00.322868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:23.746 [2024-11-08 17:10:00.322950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.746 [2024-11-08 17:10:00.322975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:23.746 [2024-11-08 17:10:00.322988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.746 [2024-11-08 17:10:00.325705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.746 [2024-11-08 17:10:00.325787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:23.746 BaseBdev4 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.746 [2024-11-08 17:10:00.330961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:23.746 [2024-11-08 17:10:00.333314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:23.746 [2024-11-08 17:10:00.333421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:23.746 [2024-11-08 17:10:00.333498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:23.746 [2024-11-08 17:10:00.333996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:23.746 [2024-11-08 17:10:00.334032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:23.746 [2024-11-08 17:10:00.334345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:22:23.746 [2024-11-08 17:10:00.334539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:23.746 [2024-11-08 17:10:00.334548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:23.746 [2024-11-08 17:10:00.334719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:23.746 17:10:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.746 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.746 "name": "raid_bdev1", 00:22:23.746 "uuid": "a3769c9a-5401-45e4-88d8-d53fdec8492b", 00:22:23.746 "strip_size_kb": 0, 00:22:23.746 "state": "online", 00:22:23.746 "raid_level": "raid1", 00:22:23.746 "superblock": true, 00:22:23.746 "num_base_bdevs": 4, 00:22:23.746 "num_base_bdevs_discovered": 4, 00:22:23.746 "num_base_bdevs_operational": 4, 00:22:23.746 "base_bdevs_list": [ 00:22:23.746 { 
00:22:23.746 "name": "BaseBdev1", 00:22:23.747 "uuid": "e878f8df-67ef-541f-992a-e2f17885647a", 00:22:23.747 "is_configured": true, 00:22:23.747 "data_offset": 2048, 00:22:23.747 "data_size": 63488 00:22:23.747 }, 00:22:23.747 { 00:22:23.747 "name": "BaseBdev2", 00:22:23.747 "uuid": "6c1f152e-3afd-5fbc-8a81-e095f6c83352", 00:22:23.747 "is_configured": true, 00:22:23.747 "data_offset": 2048, 00:22:23.747 "data_size": 63488 00:22:23.747 }, 00:22:23.747 { 00:22:23.747 "name": "BaseBdev3", 00:22:23.747 "uuid": "6d98bd36-e3e3-59d5-a21b-e25686c4fd61", 00:22:23.747 "is_configured": true, 00:22:23.747 "data_offset": 2048, 00:22:23.747 "data_size": 63488 00:22:23.747 }, 00:22:23.747 { 00:22:23.747 "name": "BaseBdev4", 00:22:23.747 "uuid": "034a6fc8-b167-5453-8ebf-6afb89ab22e6", 00:22:23.747 "is_configured": true, 00:22:23.747 "data_offset": 2048, 00:22:23.747 "data_size": 63488 00:22:23.747 } 00:22:23.747 ] 00:22:23.747 }' 00:22:23.747 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.747 17:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.006 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:24.006 17:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:24.267 [2024-11-08 17:10:00.764425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.210 17:10:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.210 17:10:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.210 "name": "raid_bdev1", 00:22:25.210 "uuid": "a3769c9a-5401-45e4-88d8-d53fdec8492b", 00:22:25.210 "strip_size_kb": 0, 00:22:25.210 "state": "online", 00:22:25.210 "raid_level": "raid1", 00:22:25.210 "superblock": true, 00:22:25.210 "num_base_bdevs": 4, 00:22:25.210 "num_base_bdevs_discovered": 4, 00:22:25.210 "num_base_bdevs_operational": 4, 00:22:25.210 "base_bdevs_list": [ 00:22:25.210 { 00:22:25.210 "name": "BaseBdev1", 00:22:25.210 "uuid": "e878f8df-67ef-541f-992a-e2f17885647a", 00:22:25.210 "is_configured": true, 00:22:25.210 "data_offset": 2048, 00:22:25.210 "data_size": 63488 00:22:25.210 }, 00:22:25.210 { 00:22:25.210 "name": "BaseBdev2", 00:22:25.210 "uuid": "6c1f152e-3afd-5fbc-8a81-e095f6c83352", 00:22:25.210 "is_configured": true, 00:22:25.210 "data_offset": 2048, 00:22:25.210 "data_size": 63488 00:22:25.210 }, 00:22:25.210 { 00:22:25.210 "name": "BaseBdev3", 00:22:25.210 "uuid": "6d98bd36-e3e3-59d5-a21b-e25686c4fd61", 00:22:25.210 "is_configured": true, 00:22:25.210 "data_offset": 2048, 00:22:25.210 "data_size": 63488 00:22:25.210 }, 00:22:25.210 { 00:22:25.210 "name": "BaseBdev4", 00:22:25.210 "uuid": "034a6fc8-b167-5453-8ebf-6afb89ab22e6", 00:22:25.210 "is_configured": true, 00:22:25.210 "data_offset": 2048, 00:22:25.210 "data_size": 63488 00:22:25.210 } 00:22:25.210 ] 00:22:25.210 }' 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.210 17:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:25.471 [2024-11-08 17:10:02.015962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:25.471 [2024-11-08 17:10:02.016018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:25.471 [2024-11-08 17:10:02.019425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:25.471 [2024-11-08 17:10:02.019517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.471 [2024-11-08 17:10:02.019682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:25.471 [2024-11-08 17:10:02.019699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:25.471 { 00:22:25.471 "results": [ 00:22:25.471 { 00:22:25.471 "job": "raid_bdev1", 00:22:25.471 "core_mask": "0x1", 00:22:25.471 "workload": "randrw", 00:22:25.471 "percentage": 50, 00:22:25.471 "status": "finished", 00:22:25.471 "queue_depth": 1, 00:22:25.471 "io_size": 131072, 00:22:25.471 "runtime": 1.249074, 00:22:25.471 "iops": 7596.027136903018, 00:22:25.471 "mibps": 949.5033921128772, 00:22:25.471 "io_failed": 0, 00:22:25.471 "io_timeout": 0, 00:22:25.471 "avg_latency_us": 128.40190945647944, 00:22:25.471 "min_latency_us": 29.53846153846154, 00:22:25.471 "max_latency_us": 1865.2553846153846 00:22:25.471 } 00:22:25.471 ], 00:22:25.471 "core_count": 1 00:22:25.471 } 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73411 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73411 ']' 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73411 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73411 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:25.471 killing process with pid 73411 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73411' 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73411 00:22:25.471 [2024-11-08 17:10:02.056886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:25.471 17:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73411 00:22:25.733 [2024-11-08 17:10:02.304692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:26.675 17:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.30QB4LI6KP 00:22:26.675 17:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:26.675 17:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:26.675 17:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:22:26.675 17:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:22:26.675 17:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:26.675 17:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:26.675 17:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:26.675 00:22:26.675 real 0m4.167s 00:22:26.675 user 0m4.653s 00:22:26.675 sys 0m0.693s 
00:22:26.675 17:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:26.675 ************************************ 00:22:26.675 END TEST raid_read_error_test 00:22:26.675 ************************************ 00:22:26.675 17:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.675 17:10:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:22:26.675 17:10:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:26.675 17:10:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:26.675 17:10:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:26.675 ************************************ 00:22:26.675 START TEST raid_write_error_test 00:22:26.675 ************************************ 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7coLQl0Zcj 00:22:26.675 17:10:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73551 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73551 00:22:26.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73551 ']' 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:26.675 17:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.676 17:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:26.676 17:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.676 17:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:26.937 [2024-11-08 17:10:03.421509] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:22:26.937 [2024-11-08 17:10:03.421742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73551 ] 00:22:26.937 [2024-11-08 17:10:03.592690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.200 [2024-11-08 17:10:03.758590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.460 [2024-11-08 17:10:03.940483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.460 [2024-11-08 17:10:03.940589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.720 BaseBdev1_malloc 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.720 true 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.720 [2024-11-08 17:10:04.353186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:27.720 [2024-11-08 17:10:04.353279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.720 [2024-11-08 17:10:04.353307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:27.720 [2024-11-08 17:10:04.353321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.720 [2024-11-08 17:10:04.356179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.720 [2024-11-08 17:10:04.356250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:27.720 BaseBdev1 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.720 BaseBdev2_malloc 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:27.720 17:10:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.720 true 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.720 [2024-11-08 17:10:04.406670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:27.720 [2024-11-08 17:10:04.406774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.720 [2024-11-08 17:10:04.406799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:27.720 [2024-11-08 17:10:04.406812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.720 [2024-11-08 17:10:04.409627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.720 [2024-11-08 17:10:04.409687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:27.720 BaseBdev2 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.720 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:22:27.980 BaseBdev3_malloc 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.980 true 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.980 [2024-11-08 17:10:04.485280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:27.980 [2024-11-08 17:10:04.485377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.980 [2024-11-08 17:10:04.485405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:27.980 [2024-11-08 17:10:04.485419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.980 [2024-11-08 17:10:04.488290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.980 [2024-11-08 17:10:04.488354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:27.980 BaseBdev3 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.980 BaseBdev4_malloc 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.980 true 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.980 [2024-11-08 17:10:04.538642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:27.980 [2024-11-08 17:10:04.538732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.980 [2024-11-08 17:10:04.538774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:27.980 [2024-11-08 17:10:04.538788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.980 [2024-11-08 17:10:04.541493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.980 [2024-11-08 17:10:04.541574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:27.980 BaseBdev4 
00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.980 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.981 [2024-11-08 17:10:04.550745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:27.981 [2024-11-08 17:10:04.553200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:27.981 [2024-11-08 17:10:04.553311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:27.981 [2024-11-08 17:10:04.553391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:27.981 [2024-11-08 17:10:04.553682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:27.981 [2024-11-08 17:10:04.553698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:27.981 [2024-11-08 17:10:04.554072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:22:27.981 [2024-11-08 17:10:04.554287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:27.981 [2024-11-08 17:10:04.554298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:27.981 [2024-11-08 17:10:04.554481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.981 "name": "raid_bdev1", 00:22:27.981 "uuid": "8730a92e-fbe7-4d91-9201-ca1898e97d6d", 00:22:27.981 "strip_size_kb": 0, 00:22:27.981 "state": "online", 00:22:27.981 "raid_level": "raid1", 00:22:27.981 "superblock": true, 00:22:27.981 "num_base_bdevs": 4, 00:22:27.981 "num_base_bdevs_discovered": 4, 00:22:27.981 
"num_base_bdevs_operational": 4, 00:22:27.981 "base_bdevs_list": [ 00:22:27.981 { 00:22:27.981 "name": "BaseBdev1", 00:22:27.981 "uuid": "3939844c-038e-5fc0-ad7b-c9e19ecf451f", 00:22:27.981 "is_configured": true, 00:22:27.981 "data_offset": 2048, 00:22:27.981 "data_size": 63488 00:22:27.981 }, 00:22:27.981 { 00:22:27.981 "name": "BaseBdev2", 00:22:27.981 "uuid": "f82855d1-6d0f-5f76-a11b-603bc1ab1a71", 00:22:27.981 "is_configured": true, 00:22:27.981 "data_offset": 2048, 00:22:27.981 "data_size": 63488 00:22:27.981 }, 00:22:27.981 { 00:22:27.981 "name": "BaseBdev3", 00:22:27.981 "uuid": "c00dff3c-bed9-5107-890c-977d643cb141", 00:22:27.981 "is_configured": true, 00:22:27.981 "data_offset": 2048, 00:22:27.981 "data_size": 63488 00:22:27.981 }, 00:22:27.981 { 00:22:27.981 "name": "BaseBdev4", 00:22:27.981 "uuid": "60f4ffbc-a2d8-57de-876e-f73be1b13aba", 00:22:27.981 "is_configured": true, 00:22:27.981 "data_offset": 2048, 00:22:27.981 "data_size": 63488 00:22:27.981 } 00:22:27.981 ] 00:22:27.981 }' 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.981 17:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.242 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:28.242 17:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:28.502 [2024-11-08 17:10:04.968089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.472 [2024-11-08 17:10:05.878074] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:22:29.472 [2024-11-08 17:10:05.878176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:29.472 [2024-11-08 17:10:05.878478] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.472 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.472 "name": "raid_bdev1", 00:22:29.472 "uuid": "8730a92e-fbe7-4d91-9201-ca1898e97d6d", 00:22:29.472 "strip_size_kb": 0, 00:22:29.472 "state": "online", 00:22:29.472 "raid_level": "raid1", 00:22:29.472 "superblock": true, 00:22:29.472 "num_base_bdevs": 4, 00:22:29.472 "num_base_bdevs_discovered": 3, 00:22:29.472 "num_base_bdevs_operational": 3, 00:22:29.472 "base_bdevs_list": [ 00:22:29.472 { 00:22:29.472 "name": null, 00:22:29.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.472 "is_configured": false, 00:22:29.472 "data_offset": 0, 00:22:29.472 "data_size": 63488 00:22:29.472 }, 00:22:29.472 { 00:22:29.472 "name": "BaseBdev2", 00:22:29.472 "uuid": "f82855d1-6d0f-5f76-a11b-603bc1ab1a71", 00:22:29.472 "is_configured": true, 00:22:29.472 "data_offset": 2048, 00:22:29.472 "data_size": 63488 00:22:29.472 }, 00:22:29.472 { 00:22:29.472 "name": "BaseBdev3", 00:22:29.472 "uuid": "c00dff3c-bed9-5107-890c-977d643cb141", 00:22:29.472 "is_configured": true, 00:22:29.472 "data_offset": 2048, 00:22:29.472 "data_size": 63488 00:22:29.472 }, 00:22:29.472 { 00:22:29.473 "name": "BaseBdev4", 00:22:29.473 "uuid": "60f4ffbc-a2d8-57de-876e-f73be1b13aba", 00:22:29.473 "is_configured": true, 00:22:29.473 "data_offset": 2048, 00:22:29.473 "data_size": 63488 00:22:29.473 } 00:22:29.473 ] 
00:22:29.473 }' 00:22:29.473 17:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.473 17:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.735 [2024-11-08 17:10:06.207129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.735 [2024-11-08 17:10:06.207187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.735 [2024-11-08 17:10:06.210605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.735 [2024-11-08 17:10:06.210680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.735 [2024-11-08 17:10:06.210835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.735 [2024-11-08 17:10:06.210852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:29.735 { 00:22:29.735 "results": [ 00:22:29.735 { 00:22:29.735 "job": "raid_bdev1", 00:22:29.735 "core_mask": "0x1", 00:22:29.735 "workload": "randrw", 00:22:29.735 "percentage": 50, 00:22:29.735 "status": "finished", 00:22:29.735 "queue_depth": 1, 00:22:29.735 "io_size": 131072, 00:22:29.735 "runtime": 1.236449, 00:22:29.735 "iops": 7669.543992514046, 00:22:29.735 "mibps": 958.6929990642558, 00:22:29.735 "io_failed": 0, 00:22:29.735 "io_timeout": 0, 00:22:29.735 "avg_latency_us": 127.07958241062956, 00:22:29.735 "min_latency_us": 29.53846153846154, 00:22:29.735 "max_latency_us": 1714.0184615384615 00:22:29.735 } 00:22:29.735 ], 00:22:29.735 "core_count": 1 
00:22:29.735 } 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73551 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73551 ']' 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73551 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73551 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:29.735 killing process with pid 73551 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73551' 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73551 00:22:29.735 17:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73551 00:22:29.735 [2024-11-08 17:10:06.243823] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:29.995 [2024-11-08 17:10:06.491021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:30.943 17:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7coLQl0Zcj 00:22:30.943 17:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:30.943 17:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:30.943 17:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:22:30.943 17:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:22:30.943 17:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:30.943 17:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:30.943 17:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:30.943 ************************************ 00:22:30.943 END TEST raid_write_error_test 00:22:30.943 ************************************ 00:22:30.943 00:22:30.943 real 0m4.102s 00:22:30.943 user 0m4.635s 00:22:30.943 sys 0m0.603s 00:22:30.943 17:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:30.943 17:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.943 17:10:07 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:22:30.943 17:10:07 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:22:30.943 17:10:07 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:22:30.943 17:10:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:30.943 17:10:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:30.943 17:10:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:30.943 ************************************ 00:22:30.943 START TEST raid_rebuild_test 00:22:30.943 ************************************ 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:22:30.943 
17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=73689 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 73689 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 73689 ']' 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:30.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:30.943 17:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.944 17:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:30.944 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:30.944 Zero copy mechanism will not be used. 00:22:30.944 [2024-11-08 17:10:07.590271] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:22:30.944 [2024-11-08 17:10:07.590473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73689 ] 00:22:31.205 [2024-11-08 17:10:07.758495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.465 [2024-11-08 17:10:07.930401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.465 [2024-11-08 17:10:08.114956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:31.465 [2024-11-08 17:10:08.115037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.039 BaseBdev1_malloc 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.039 [2024-11-08 17:10:08.544442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:32.039 
[2024-11-08 17:10:08.544561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.039 [2024-11-08 17:10:08.544594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:32.039 [2024-11-08 17:10:08.544609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.039 [2024-11-08 17:10:08.547474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.039 [2024-11-08 17:10:08.547544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:32.039 BaseBdev1 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.039 BaseBdev2_malloc 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.039 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 [2024-11-08 17:10:08.593914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:32.040 [2024-11-08 17:10:08.594030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.040 [2024-11-08 17:10:08.594061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:22:32.040 [2024-11-08 17:10:08.594076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.040 [2024-11-08 17:10:08.596926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.040 [2024-11-08 17:10:08.597170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:32.040 BaseBdev2 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 spare_malloc 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 spare_delay 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 [2024-11-08 17:10:08.663578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:32.040 [2024-11-08 17:10:08.663875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:22:32.040 [2024-11-08 17:10:08.663915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:32.040 [2024-11-08 17:10:08.663930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.040 [2024-11-08 17:10:08.666791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.040 [2024-11-08 17:10:08.666856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:32.040 spare 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 [2024-11-08 17:10:08.671806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:32.040 [2024-11-08 17:10:08.674188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:32.040 [2024-11-08 17:10:08.674490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:32.040 [2024-11-08 17:10:08.674518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:32.040 [2024-11-08 17:10:08.674890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:32.040 [2024-11-08 17:10:08.675085] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:32.040 [2024-11-08 17:10:08.675096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:32.040 [2024-11-08 17:10:08.675287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.040 "name": "raid_bdev1", 00:22:32.040 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:32.040 "strip_size_kb": 0, 00:22:32.040 "state": "online", 00:22:32.040 
"raid_level": "raid1", 00:22:32.040 "superblock": false, 00:22:32.040 "num_base_bdevs": 2, 00:22:32.040 "num_base_bdevs_discovered": 2, 00:22:32.040 "num_base_bdevs_operational": 2, 00:22:32.040 "base_bdevs_list": [ 00:22:32.040 { 00:22:32.040 "name": "BaseBdev1", 00:22:32.040 "uuid": "81797974-61c2-5870-8c5d-648b20188b83", 00:22:32.040 "is_configured": true, 00:22:32.040 "data_offset": 0, 00:22:32.040 "data_size": 65536 00:22:32.040 }, 00:22:32.040 { 00:22:32.040 "name": "BaseBdev2", 00:22:32.040 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:32.040 "is_configured": true, 00:22:32.040 "data_offset": 0, 00:22:32.040 "data_size": 65536 00:22:32.040 } 00:22:32.040 ] 00:22:32.040 }' 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.040 17:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.301 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:32.301 17:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:32.301 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.301 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.301 [2024-11-08 17:10:09.008295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.562 17:10:09 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:32.562 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:32.823 [2024-11-08 17:10:09.284014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:32.823 /dev/nbd0 00:22:32.823 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:32.823 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:22:32.823 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:32.824 1+0 records in 00:22:32.824 1+0 records out 00:22:32.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103144 s, 4.0 MB/s 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:32.824 17:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:47.813 65536+0 records in 00:22:47.813 65536+0 records out 00:22:47.813 33554432 bytes (34 MB, 32 MiB) copied, 13.1838 s, 2.5 MB/s 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:47.813 [2024-11-08 17:10:22.734669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.813 [2024-11-08 17:10:22.764838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.813 17:10:22 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.813 "name": "raid_bdev1", 00:22:47.813 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:47.813 "strip_size_kb": 0, 00:22:47.813 "state": "online", 00:22:47.813 "raid_level": "raid1", 00:22:47.813 "superblock": false, 00:22:47.813 "num_base_bdevs": 2, 00:22:47.813 "num_base_bdevs_discovered": 1, 00:22:47.813 "num_base_bdevs_operational": 1, 00:22:47.813 "base_bdevs_list": [ 00:22:47.813 { 00:22:47.813 "name": null, 00:22:47.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.813 "is_configured": false, 00:22:47.813 "data_offset": 0, 00:22:47.813 "data_size": 65536 00:22:47.813 }, 00:22:47.813 { 00:22:47.813 "name": "BaseBdev2", 00:22:47.813 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:47.813 "is_configured": true, 00:22:47.813 "data_offset": 0, 00:22:47.813 "data_size": 65536 00:22:47.813 } 00:22:47.813 ] 00:22:47.813 }' 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.813 17:10:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.813 17:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:47.813 17:10:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.813 17:10:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.813 [2024-11-08 17:10:23.120936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:47.813 [2024-11-08 17:10:23.133226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:22:47.813 17:10:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.813 17:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:47.813 [2024-11-08 17:10:23.135314] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.813 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:47.813 "name": "raid_bdev1", 00:22:47.813 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:47.813 "strip_size_kb": 0, 00:22:47.814 "state": "online", 00:22:47.814 "raid_level": "raid1", 00:22:47.814 "superblock": false, 00:22:47.814 "num_base_bdevs": 2, 00:22:47.814 "num_base_bdevs_discovered": 2, 00:22:47.814 "num_base_bdevs_operational": 2, 00:22:47.814 "process": { 00:22:47.814 "type": "rebuild", 00:22:47.814 "target": "spare", 00:22:47.814 "progress": { 00:22:47.814 
"blocks": 20480, 00:22:47.814 "percent": 31 00:22:47.814 } 00:22:47.814 }, 00:22:47.814 "base_bdevs_list": [ 00:22:47.814 { 00:22:47.814 "name": "spare", 00:22:47.814 "uuid": "c00c2012-4a19-56d2-8558-8cd616fedfd3", 00:22:47.814 "is_configured": true, 00:22:47.814 "data_offset": 0, 00:22:47.814 "data_size": 65536 00:22:47.814 }, 00:22:47.814 { 00:22:47.814 "name": "BaseBdev2", 00:22:47.814 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:47.814 "is_configured": true, 00:22:47.814 "data_offset": 0, 00:22:47.814 "data_size": 65536 00:22:47.814 } 00:22:47.814 ] 00:22:47.814 }' 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.814 [2024-11-08 17:10:24.240708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:47.814 [2024-11-08 17:10:24.242424] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:47.814 [2024-11-08 17:10:24.242486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.814 [2024-11-08 17:10:24.242501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:47.814 [2024-11-08 17:10:24.242511] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:47.814 17:10:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.814 "name": "raid_bdev1", 00:22:47.814 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:47.814 "strip_size_kb": 0, 00:22:47.814 "state": "online", 00:22:47.814 "raid_level": "raid1", 00:22:47.814 
"superblock": false, 00:22:47.814 "num_base_bdevs": 2, 00:22:47.814 "num_base_bdevs_discovered": 1, 00:22:47.814 "num_base_bdevs_operational": 1, 00:22:47.814 "base_bdevs_list": [ 00:22:47.814 { 00:22:47.814 "name": null, 00:22:47.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.814 "is_configured": false, 00:22:47.814 "data_offset": 0, 00:22:47.814 "data_size": 65536 00:22:47.814 }, 00:22:47.814 { 00:22:47.814 "name": "BaseBdev2", 00:22:47.814 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:47.814 "is_configured": true, 00:22:47.814 "data_offset": 0, 00:22:47.814 "data_size": 65536 00:22:47.814 } 00:22:47.814 ] 00:22:47.814 }' 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.814 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.072 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:22:48.073 "name": "raid_bdev1", 00:22:48.073 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:48.073 "strip_size_kb": 0, 00:22:48.073 "state": "online", 00:22:48.073 "raid_level": "raid1", 00:22:48.073 "superblock": false, 00:22:48.073 "num_base_bdevs": 2, 00:22:48.073 "num_base_bdevs_discovered": 1, 00:22:48.073 "num_base_bdevs_operational": 1, 00:22:48.073 "base_bdevs_list": [ 00:22:48.073 { 00:22:48.073 "name": null, 00:22:48.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.073 "is_configured": false, 00:22:48.073 "data_offset": 0, 00:22:48.073 "data_size": 65536 00:22:48.073 }, 00:22:48.073 { 00:22:48.073 "name": "BaseBdev2", 00:22:48.073 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:48.073 "is_configured": true, 00:22:48.073 "data_offset": 0, 00:22:48.073 "data_size": 65536 00:22:48.073 } 00:22:48.073 ] 00:22:48.073 }' 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.073 [2024-11-08 17:10:24.683171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:48.073 [2024-11-08 17:10:24.694766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:22:48.073 17:10:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.073 
17:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:48.073 [2024-11-08 17:10:24.696800] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:49.006 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.006 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:49.006 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:49.006 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:49.006 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:49.006 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.006 17:10:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.006 17:10:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.006 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.006 17:10:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:49.299 "name": "raid_bdev1", 00:22:49.299 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:49.299 "strip_size_kb": 0, 00:22:49.299 "state": "online", 00:22:49.299 "raid_level": "raid1", 00:22:49.299 "superblock": false, 00:22:49.299 "num_base_bdevs": 2, 00:22:49.299 "num_base_bdevs_discovered": 2, 00:22:49.299 "num_base_bdevs_operational": 2, 00:22:49.299 "process": { 00:22:49.299 "type": "rebuild", 00:22:49.299 "target": "spare", 00:22:49.299 "progress": { 00:22:49.299 "blocks": 20480, 00:22:49.299 "percent": 31 00:22:49.299 } 00:22:49.299 }, 00:22:49.299 "base_bdevs_list": [ 
00:22:49.299 { 00:22:49.299 "name": "spare", 00:22:49.299 "uuid": "c00c2012-4a19-56d2-8558-8cd616fedfd3", 00:22:49.299 "is_configured": true, 00:22:49.299 "data_offset": 0, 00:22:49.299 "data_size": 65536 00:22:49.299 }, 00:22:49.299 { 00:22:49.299 "name": "BaseBdev2", 00:22:49.299 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:49.299 "is_configured": true, 00:22:49.299 "data_offset": 0, 00:22:49.299 "data_size": 65536 00:22:49.299 } 00:22:49.299 ] 00:22:49.299 }' 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=313 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:49.299 
17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:49.299 "name": "raid_bdev1", 00:22:49.299 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:49.299 "strip_size_kb": 0, 00:22:49.299 "state": "online", 00:22:49.299 "raid_level": "raid1", 00:22:49.299 "superblock": false, 00:22:49.299 "num_base_bdevs": 2, 00:22:49.299 "num_base_bdevs_discovered": 2, 00:22:49.299 "num_base_bdevs_operational": 2, 00:22:49.299 "process": { 00:22:49.299 "type": "rebuild", 00:22:49.299 "target": "spare", 00:22:49.299 "progress": { 00:22:49.299 "blocks": 20480, 00:22:49.299 "percent": 31 00:22:49.299 } 00:22:49.299 }, 00:22:49.299 "base_bdevs_list": [ 00:22:49.299 { 00:22:49.299 "name": "spare", 00:22:49.299 "uuid": "c00c2012-4a19-56d2-8558-8cd616fedfd3", 00:22:49.299 "is_configured": true, 00:22:49.299 "data_offset": 0, 00:22:49.299 "data_size": 65536 00:22:49.299 }, 00:22:49.299 { 00:22:49.299 "name": "BaseBdev2", 00:22:49.299 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:49.299 "is_configured": true, 00:22:49.299 "data_offset": 0, 00:22:49.299 "data_size": 65536 00:22:49.299 } 00:22:49.299 ] 00:22:49.299 }' 00:22:49.299 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:49.300 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:22:49.300 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:49.300 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.300 17:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:50.267 "name": "raid_bdev1", 00:22:50.267 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:50.267 "strip_size_kb": 0, 00:22:50.267 "state": "online", 00:22:50.267 "raid_level": "raid1", 00:22:50.267 "superblock": false, 00:22:50.267 "num_base_bdevs": 2, 00:22:50.267 "num_base_bdevs_discovered": 2, 00:22:50.267 "num_base_bdevs_operational": 2, 00:22:50.267 "process": { 
00:22:50.267 "type": "rebuild", 00:22:50.267 "target": "spare", 00:22:50.267 "progress": { 00:22:50.267 "blocks": 43008, 00:22:50.267 "percent": 65 00:22:50.267 } 00:22:50.267 }, 00:22:50.267 "base_bdevs_list": [ 00:22:50.267 { 00:22:50.267 "name": "spare", 00:22:50.267 "uuid": "c00c2012-4a19-56d2-8558-8cd616fedfd3", 00:22:50.267 "is_configured": true, 00:22:50.267 "data_offset": 0, 00:22:50.267 "data_size": 65536 00:22:50.267 }, 00:22:50.267 { 00:22:50.267 "name": "BaseBdev2", 00:22:50.267 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:50.267 "is_configured": true, 00:22:50.267 "data_offset": 0, 00:22:50.267 "data_size": 65536 00:22:50.267 } 00:22:50.267 ] 00:22:50.267 }' 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:50.267 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:50.526 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:50.526 17:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:51.461 [2024-11-08 17:10:27.916945] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:51.461 [2024-11-08 17:10:27.917042] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:51.461 [2024-11-08 17:10:27.917102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.461 17:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:51.461 17:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.461 17:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:51.461 17:10:27 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:51.461 17:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:51.461 17:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:51.461 17:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.461 17:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.461 17:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.461 17:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:51.461 "name": "raid_bdev1", 00:22:51.461 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:51.461 "strip_size_kb": 0, 00:22:51.461 "state": "online", 00:22:51.461 "raid_level": "raid1", 00:22:51.461 "superblock": false, 00:22:51.461 "num_base_bdevs": 2, 00:22:51.461 "num_base_bdevs_discovered": 2, 00:22:51.461 "num_base_bdevs_operational": 2, 00:22:51.461 "base_bdevs_list": [ 00:22:51.461 { 00:22:51.461 "name": "spare", 00:22:51.461 "uuid": "c00c2012-4a19-56d2-8558-8cd616fedfd3", 00:22:51.461 "is_configured": true, 00:22:51.461 "data_offset": 0, 00:22:51.461 "data_size": 65536 00:22:51.461 }, 00:22:51.461 { 00:22:51.461 "name": "BaseBdev2", 00:22:51.461 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:51.461 "is_configured": true, 00:22:51.461 "data_offset": 0, 00:22:51.461 "data_size": 65536 00:22:51.461 } 00:22:51.461 ] 00:22:51.461 }' 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:51.461 17:10:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:51.461 "name": "raid_bdev1", 00:22:51.461 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:51.461 "strip_size_kb": 0, 00:22:51.461 "state": "online", 00:22:51.461 "raid_level": "raid1", 00:22:51.461 "superblock": false, 00:22:51.461 "num_base_bdevs": 2, 00:22:51.461 "num_base_bdevs_discovered": 2, 00:22:51.461 "num_base_bdevs_operational": 2, 00:22:51.461 "base_bdevs_list": [ 00:22:51.461 { 00:22:51.461 "name": "spare", 00:22:51.461 "uuid": "c00c2012-4a19-56d2-8558-8cd616fedfd3", 00:22:51.461 "is_configured": true, 
00:22:51.461 "data_offset": 0, 00:22:51.461 "data_size": 65536 00:22:51.461 }, 00:22:51.461 { 00:22:51.461 "name": "BaseBdev2", 00:22:51.461 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:51.461 "is_configured": true, 00:22:51.461 "data_offset": 0, 00:22:51.461 "data_size": 65536 00:22:51.461 } 00:22:51.461 ] 00:22:51.461 }' 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:51.461 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.719 "name": "raid_bdev1", 00:22:51.719 "uuid": "92054912-df07-4110-b350-067f56cd1337", 00:22:51.719 "strip_size_kb": 0, 00:22:51.719 "state": "online", 00:22:51.719 "raid_level": "raid1", 00:22:51.719 "superblock": false, 00:22:51.719 "num_base_bdevs": 2, 00:22:51.719 "num_base_bdevs_discovered": 2, 00:22:51.719 "num_base_bdevs_operational": 2, 00:22:51.719 "base_bdevs_list": [ 00:22:51.719 { 00:22:51.719 "name": "spare", 00:22:51.719 "uuid": "c00c2012-4a19-56d2-8558-8cd616fedfd3", 00:22:51.719 "is_configured": true, 00:22:51.719 "data_offset": 0, 00:22:51.719 "data_size": 65536 00:22:51.719 }, 00:22:51.719 { 00:22:51.719 "name": "BaseBdev2", 00:22:51.719 "uuid": "29e3371b-f954-5efc-9c7d-a054a153504b", 00:22:51.719 "is_configured": true, 00:22:51.719 "data_offset": 0, 00:22:51.719 "data_size": 65536 00:22:51.719 } 00:22:51.719 ] 00:22:51.719 }' 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.719 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.978 [2024-11-08 17:10:28.505189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:51.978 [2024-11-08 17:10:28.505330] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:51.978 [2024-11-08 17:10:28.505435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:51.978 [2024-11-08 17:10:28.505509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:51.978 [2024-11-08 17:10:28.505538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:51.978 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:52.236 /dev/nbd0 00:22:52.236 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:52.236 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:52.236 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:52.237 1+0 records in 00:22:52.237 1+0 records out 00:22:52.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756187 s, 5.4 MB/s 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:52.237 17:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:52.495 /dev/nbd1 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:52.495 1+0 records in 00:22:52.495 1+0 records out 00:22:52.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441015 s, 9.3 MB/s 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:52.495 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:52.752 17:10:29 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:52.752 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:52.752 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:52.752 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:52.752 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:52.752 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:52.752 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:52.752 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:52.752 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:52.752 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 73689 00:22:53.012 17:10:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 73689 ']' 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 73689 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73689 00:22:53.012 killing process with pid 73689 00:22:53.012 Received shutdown signal, test time was about 60.000000 seconds 00:22:53.012 00:22:53.012 Latency(us) 00:22:53.012 [2024-11-08T17:10:29.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.012 [2024-11-08T17:10:29.727Z] =================================================================================================================== 00:22:53.012 [2024-11-08T17:10:29.727Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73689' 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 73689 00:22:53.012 17:10:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 73689 00:22:53.012 [2024-11-08 17:10:29.659544] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:53.270 [2024-11-08 17:10:29.861666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:22:54.211 00:22:54.211 real 0m23.128s 00:22:54.211 user 0m22.016s 00:22:54.211 sys 0m6.094s 00:22:54.211 
************************************ 00:22:54.211 END TEST raid_rebuild_test 00:22:54.211 ************************************ 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.211 17:10:30 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:22:54.211 17:10:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:54.211 17:10:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:54.211 17:10:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:54.211 ************************************ 00:22:54.211 START TEST raid_rebuild_test_sb 00:22:54.211 ************************************ 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:54.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=74185 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 74185 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 74185 ']' 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.211 17:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:54.211 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:54.211 Zero copy mechanism will not be used. 00:22:54.211 [2024-11-08 17:10:30.788765] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:22:54.211 [2024-11-08 17:10:30.788909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74185 ] 00:22:54.472 [2024-11-08 17:10:30.954928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.472 [2024-11-08 17:10:31.086015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.734 [2024-11-08 17:10:31.246198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:54.734 [2024-11-08 17:10:31.246273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.025 BaseBdev1_malloc 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.025 [2024-11-08 17:10:31.714491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:22:55.025 [2024-11-08 17:10:31.714799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.025 [2024-11-08 17:10:31.714841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:55.025 [2024-11-08 17:10:31.714857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.025 [2024-11-08 17:10:31.717658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.025 [2024-11-08 17:10:31.717712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:55.025 BaseBdev1 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.025 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.287 BaseBdev2_malloc 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.287 [2024-11-08 17:10:31.767107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:55.287 [2024-11-08 17:10:31.767209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.287 [2024-11-08 17:10:31.767236] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:55.287 [2024-11-08 17:10:31.767254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.287 [2024-11-08 17:10:31.770009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.287 [2024-11-08 17:10:31.770063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:55.287 BaseBdev2 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.287 spare_malloc 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.287 spare_delay 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.287 [2024-11-08 17:10:31.846355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:22:55.287 [2024-11-08 17:10:31.846443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.287 [2024-11-08 17:10:31.846473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:55.287 [2024-11-08 17:10:31.846486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.287 [2024-11-08 17:10:31.849244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.287 [2024-11-08 17:10:31.849296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:55.287 spare 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.287 [2024-11-08 17:10:31.854425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:55.287 [2024-11-08 17:10:31.856902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:55.287 [2024-11-08 17:10:31.857110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:55.287 [2024-11-08 17:10:31.857129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:55.287 [2024-11-08 17:10:31.857455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:55.287 [2024-11-08 17:10:31.857691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:55.287 [2024-11-08 17:10:31.857702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:22:55.287 [2024-11-08 17:10:31.858088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.287 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:22:55.287 "name": "raid_bdev1", 00:22:55.287 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:22:55.287 "strip_size_kb": 0, 00:22:55.287 "state": "online", 00:22:55.287 "raid_level": "raid1", 00:22:55.287 "superblock": true, 00:22:55.287 "num_base_bdevs": 2, 00:22:55.287 "num_base_bdevs_discovered": 2, 00:22:55.287 "num_base_bdevs_operational": 2, 00:22:55.287 "base_bdevs_list": [ 00:22:55.287 { 00:22:55.287 "name": "BaseBdev1", 00:22:55.287 "uuid": "35fdd4da-5dbf-5a10-8004-fff61c334c15", 00:22:55.288 "is_configured": true, 00:22:55.288 "data_offset": 2048, 00:22:55.288 "data_size": 63488 00:22:55.288 }, 00:22:55.288 { 00:22:55.288 "name": "BaseBdev2", 00:22:55.288 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:22:55.288 "is_configured": true, 00:22:55.288 "data_offset": 2048, 00:22:55.288 "data_size": 63488 00:22:55.288 } 00:22:55.288 ] 00:22:55.288 }' 00:22:55.288 17:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.288 17:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.547 [2024-11-08 17:10:32.210894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.547 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:55.808 [2024-11-08 17:10:32.470667] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:55.808 /dev/nbd0 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:55.808 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:22:55.809 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:55.809 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:55.809 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:55.809 1+0 records in 00:22:55.809 1+0 records out 00:22:55.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520016 s, 7.9 MB/s 00:22:56.067 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.067 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:22:56.067 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.067 17:10:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:56.067 17:10:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:22:56.067 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:56.067 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:56.067 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:56.067 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:56.067 17:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:04.190 63488+0 records in 00:23:04.190 63488+0 records out 00:23:04.190 32505856 bytes (33 MB, 31 MiB) copied, 7.24532 s, 4.5 MB/s 00:23:04.190 17:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:04.190 17:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:04.190 17:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:04.190 17:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:04.190 17:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:04.190 17:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.190 17:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:04.190 [2024-11-08 17:10:40.034176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.190 [2024-11-08 17:10:40.074543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.190 "name": "raid_bdev1", 00:23:04.190 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:04.190 "strip_size_kb": 0, 00:23:04.190 "state": "online", 00:23:04.190 "raid_level": "raid1", 00:23:04.190 "superblock": true, 00:23:04.190 "num_base_bdevs": 2, 00:23:04.190 "num_base_bdevs_discovered": 1, 00:23:04.190 "num_base_bdevs_operational": 1, 00:23:04.190 "base_bdevs_list": [ 00:23:04.190 { 00:23:04.190 "name": null, 00:23:04.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.190 "is_configured": false, 00:23:04.190 "data_offset": 0, 00:23:04.190 "data_size": 63488 00:23:04.190 }, 00:23:04.190 { 00:23:04.190 "name": "BaseBdev2", 00:23:04.190 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:04.190 "is_configured": true, 00:23:04.190 "data_offset": 2048, 00:23:04.190 "data_size": 63488 00:23:04.190 } 00:23:04.190 ] 00:23:04.190 }' 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.190 [2024-11-08 17:10:40.422652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:04.190 [2024-11-08 17:10:40.437122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.190 17:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:04.190 [2024-11-08 17:10:40.439708] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:04.763 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.764 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.764 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:04.764 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:04.764 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.764 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.764 17:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.764 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.764 17:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.764 17:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:23:05.025 "name": "raid_bdev1", 00:23:05.025 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:05.025 "strip_size_kb": 0, 00:23:05.025 "state": "online", 00:23:05.025 "raid_level": "raid1", 00:23:05.025 "superblock": true, 00:23:05.025 "num_base_bdevs": 2, 00:23:05.025 "num_base_bdevs_discovered": 2, 00:23:05.025 "num_base_bdevs_operational": 2, 00:23:05.025 "process": { 00:23:05.025 "type": "rebuild", 00:23:05.025 "target": "spare", 00:23:05.025 "progress": { 00:23:05.025 "blocks": 20480, 00:23:05.025 "percent": 32 00:23:05.025 } 00:23:05.025 }, 00:23:05.025 "base_bdevs_list": [ 00:23:05.025 { 00:23:05.025 "name": "spare", 00:23:05.025 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:05.025 "is_configured": true, 00:23:05.025 "data_offset": 2048, 00:23:05.025 "data_size": 63488 00:23:05.025 }, 00:23:05.025 { 00:23:05.025 "name": "BaseBdev2", 00:23:05.025 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:05.025 "is_configured": true, 00:23:05.025 "data_offset": 2048, 00:23:05.025 "data_size": 63488 00:23:05.025 } 00:23:05.025 ] 00:23:05.025 }' 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.025 [2024-11-08 17:10:41.557414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:05.025 [2024-11-08 
17:10:41.652411] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:05.025 [2024-11-08 17:10:41.652790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.025 [2024-11-08 17:10:41.652819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:05.025 [2024-11-08 17:10:41.652840] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:05.025 "name": "raid_bdev1", 00:23:05.025 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:05.025 "strip_size_kb": 0, 00:23:05.025 "state": "online", 00:23:05.025 "raid_level": "raid1", 00:23:05.025 "superblock": true, 00:23:05.025 "num_base_bdevs": 2, 00:23:05.025 "num_base_bdevs_discovered": 1, 00:23:05.025 "num_base_bdevs_operational": 1, 00:23:05.025 "base_bdevs_list": [ 00:23:05.025 { 00:23:05.025 "name": null, 00:23:05.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.025 "is_configured": false, 00:23:05.025 "data_offset": 0, 00:23:05.025 "data_size": 63488 00:23:05.025 }, 00:23:05.025 { 00:23:05.025 "name": "BaseBdev2", 00:23:05.025 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:05.025 "is_configured": true, 00:23:05.025 "data_offset": 2048, 00:23:05.025 "data_size": 63488 00:23:05.025 } 00:23:05.025 ] 00:23:05.025 }' 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:05.025 17:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:05.594 "name": "raid_bdev1", 00:23:05.594 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:05.594 "strip_size_kb": 0, 00:23:05.594 "state": "online", 00:23:05.594 "raid_level": "raid1", 00:23:05.594 "superblock": true, 00:23:05.594 "num_base_bdevs": 2, 00:23:05.594 "num_base_bdevs_discovered": 1, 00:23:05.594 "num_base_bdevs_operational": 1, 00:23:05.594 "base_bdevs_list": [ 00:23:05.594 { 00:23:05.594 "name": null, 00:23:05.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.594 "is_configured": false, 00:23:05.594 "data_offset": 0, 00:23:05.594 "data_size": 63488 00:23:05.594 }, 00:23:05.594 { 00:23:05.594 "name": "BaseBdev2", 00:23:05.594 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:05.594 "is_configured": true, 00:23:05.594 "data_offset": 2048, 00:23:05.594 "data_size": 63488 00:23:05.594 } 00:23:05.594 ] 00:23:05.594 }' 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:05.594 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:05.595 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:05.595 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:05.595 17:10:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:05.595 17:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.595 17:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.595 [2024-11-08 17:10:42.113153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:05.595 [2024-11-08 17:10:42.127280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:23:05.595 17:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.595 17:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:05.595 [2024-11-08 17:10:42.129965] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:23:06.569 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.569 "name": "raid_bdev1", 00:23:06.569 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:06.569 "strip_size_kb": 0, 00:23:06.569 "state": "online", 00:23:06.569 "raid_level": "raid1", 00:23:06.569 "superblock": true, 00:23:06.569 "num_base_bdevs": 2, 00:23:06.569 "num_base_bdevs_discovered": 2, 00:23:06.569 "num_base_bdevs_operational": 2, 00:23:06.569 "process": { 00:23:06.569 "type": "rebuild", 00:23:06.569 "target": "spare", 00:23:06.569 "progress": { 00:23:06.569 "blocks": 20480, 00:23:06.569 "percent": 32 00:23:06.569 } 00:23:06.569 }, 00:23:06.569 "base_bdevs_list": [ 00:23:06.569 { 00:23:06.569 "name": "spare", 00:23:06.570 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:06.570 "is_configured": true, 00:23:06.570 "data_offset": 2048, 00:23:06.570 "data_size": 63488 00:23:06.570 }, 00:23:06.570 { 00:23:06.570 "name": "BaseBdev2", 00:23:06.570 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:06.570 "is_configured": true, 00:23:06.570 "data_offset": 2048, 00:23:06.570 "data_size": 63488 00:23:06.570 } 00:23:06.570 ] 00:23:06.570 }' 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:06.570 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:06.570 17:10:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=331 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.570 "name": "raid_bdev1", 00:23:06.570 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:06.570 "strip_size_kb": 0, 00:23:06.570 "state": "online", 00:23:06.570 "raid_level": "raid1", 00:23:06.570 "superblock": true, 00:23:06.570 "num_base_bdevs": 2, 00:23:06.570 
"num_base_bdevs_discovered": 2, 00:23:06.570 "num_base_bdevs_operational": 2, 00:23:06.570 "process": { 00:23:06.570 "type": "rebuild", 00:23:06.570 "target": "spare", 00:23:06.570 "progress": { 00:23:06.570 "blocks": 22528, 00:23:06.570 "percent": 35 00:23:06.570 } 00:23:06.570 }, 00:23:06.570 "base_bdevs_list": [ 00:23:06.570 { 00:23:06.570 "name": "spare", 00:23:06.570 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:06.570 "is_configured": true, 00:23:06.570 "data_offset": 2048, 00:23:06.570 "data_size": 63488 00:23:06.570 }, 00:23:06.570 { 00:23:06.570 "name": "BaseBdev2", 00:23:06.570 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:06.570 "is_configured": true, 00:23:06.570 "data_offset": 2048, 00:23:06.570 "data_size": 63488 00:23:06.570 } 00:23:06.570 ] 00:23:06.570 }' 00:23:06.570 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.828 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.828 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.828 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.828 17:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:07.762 "name": "raid_bdev1", 00:23:07.762 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:07.762 "strip_size_kb": 0, 00:23:07.762 "state": "online", 00:23:07.762 "raid_level": "raid1", 00:23:07.762 "superblock": true, 00:23:07.762 "num_base_bdevs": 2, 00:23:07.762 "num_base_bdevs_discovered": 2, 00:23:07.762 "num_base_bdevs_operational": 2, 00:23:07.762 "process": { 00:23:07.762 "type": "rebuild", 00:23:07.762 "target": "spare", 00:23:07.762 "progress": { 00:23:07.762 "blocks": 45056, 00:23:07.762 "percent": 70 00:23:07.762 } 00:23:07.762 }, 00:23:07.762 "base_bdevs_list": [ 00:23:07.762 { 00:23:07.762 "name": "spare", 00:23:07.762 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:07.762 "is_configured": true, 00:23:07.762 "data_offset": 2048, 00:23:07.762 "data_size": 63488 00:23:07.762 }, 00:23:07.762 { 00:23:07.762 "name": "BaseBdev2", 00:23:07.762 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:07.762 "is_configured": true, 00:23:07.762 "data_offset": 2048, 00:23:07.762 "data_size": 63488 00:23:07.762 } 00:23:07.762 ] 00:23:07.762 }' 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:07.762 17:10:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:07.762 17:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:08.695 [2024-11-08 17:10:45.250177] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:08.695 [2024-11-08 17:10:45.250277] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:08.695 [2024-11-08 17:10:45.250412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:23:08.953 "name": "raid_bdev1", 00:23:08.953 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:08.953 "strip_size_kb": 0, 00:23:08.953 "state": "online", 00:23:08.953 "raid_level": "raid1", 00:23:08.953 "superblock": true, 00:23:08.953 "num_base_bdevs": 2, 00:23:08.953 "num_base_bdevs_discovered": 2, 00:23:08.953 "num_base_bdevs_operational": 2, 00:23:08.953 "base_bdevs_list": [ 00:23:08.953 { 00:23:08.953 "name": "spare", 00:23:08.953 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:08.953 "is_configured": true, 00:23:08.953 "data_offset": 2048, 00:23:08.953 "data_size": 63488 00:23:08.953 }, 00:23:08.953 { 00:23:08.953 "name": "BaseBdev2", 00:23:08.953 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:08.953 "is_configured": true, 00:23:08.953 "data_offset": 2048, 00:23:08.953 "data_size": 63488 00:23:08.953 } 00:23:08.953 ] 00:23:08.953 }' 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.953 17:10:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.953 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:08.953 "name": "raid_bdev1", 00:23:08.953 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:08.953 "strip_size_kb": 0, 00:23:08.953 "state": "online", 00:23:08.953 "raid_level": "raid1", 00:23:08.953 "superblock": true, 00:23:08.953 "num_base_bdevs": 2, 00:23:08.953 "num_base_bdevs_discovered": 2, 00:23:08.953 "num_base_bdevs_operational": 2, 00:23:08.953 "base_bdevs_list": [ 00:23:08.953 { 00:23:08.953 "name": "spare", 00:23:08.953 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:08.953 "is_configured": true, 00:23:08.953 "data_offset": 2048, 00:23:08.953 "data_size": 63488 00:23:08.953 }, 00:23:08.953 { 00:23:08.953 "name": "BaseBdev2", 00:23:08.953 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:08.953 "is_configured": true, 00:23:08.953 "data_offset": 2048, 00:23:08.953 "data_size": 63488 00:23:08.953 } 00:23:08.953 ] 00:23:08.954 }' 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.954 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.256 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.256 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.256 "name": "raid_bdev1", 00:23:09.256 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:09.256 "strip_size_kb": 0, 00:23:09.256 "state": "online", 00:23:09.256 "raid_level": "raid1", 00:23:09.256 "superblock": true, 00:23:09.256 "num_base_bdevs": 2, 00:23:09.256 
"num_base_bdevs_discovered": 2, 00:23:09.256 "num_base_bdevs_operational": 2, 00:23:09.256 "base_bdevs_list": [ 00:23:09.256 { 00:23:09.256 "name": "spare", 00:23:09.256 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:09.256 "is_configured": true, 00:23:09.256 "data_offset": 2048, 00:23:09.256 "data_size": 63488 00:23:09.256 }, 00:23:09.256 { 00:23:09.256 "name": "BaseBdev2", 00:23:09.256 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:09.256 "is_configured": true, 00:23:09.256 "data_offset": 2048, 00:23:09.256 "data_size": 63488 00:23:09.256 } 00:23:09.256 ] 00:23:09.256 }' 00:23:09.256 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.256 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.513 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:09.513 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.513 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.514 [2024-11-08 17:10:45.974847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:09.514 [2024-11-08 17:10:45.974882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:09.514 [2024-11-08 17:10:45.974969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:09.514 [2024-11-08 17:10:45.975051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:09.514 [2024-11-08 17:10:45.975062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:09.514 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.514 17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:23:09.514 
17:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.514 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.514 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.514 17:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:09.514 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:09.514 /dev/nbd0 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.771 1+0 records in 00:23:09.771 1+0 records out 00:23:09.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436961 s, 9.4 MB/s 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.771 17:10:46 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:09.771 /dev/nbd1 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:09.771 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:09.772 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:09.772 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:09.772 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:09.772 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:10.030 1+0 records in 00:23:10.030 1+0 records out 00:23:10.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658125 s, 6.2 MB/s 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:10.030 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:10.287 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:10.287 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:10.287 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:10.287 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:10.287 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:10.287 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd0 /proc/partitions 00:23:10.287 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:10.287 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:10.287 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:10.287 17:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:10.546 17:10:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.546 [2024-11-08 17:10:47.095631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:10.546 [2024-11-08 17:10:47.095693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.546 [2024-11-08 17:10:47.095718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:10.546 [2024-11-08 17:10:47.095729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.546 [2024-11-08 17:10:47.098160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.546 [2024-11-08 17:10:47.098296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:10.546 [2024-11-08 17:10:47.098418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:10.546 [2024-11-08 17:10:47.098469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:10.546 [2024-11-08 17:10:47.098621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:10.546 spare 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.546 [2024-11-08 17:10:47.198719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:10.546 [2024-11-08 17:10:47.198771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:10.546 [2024-11-08 
17:10:47.199119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:23:10.546 [2024-11-08 17:10:47.199313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:10.546 [2024-11-08 17:10:47.199328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:10.546 [2024-11-08 17:10:47.199516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.546 17:10:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.546 "name": "raid_bdev1", 00:23:10.546 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:10.546 "strip_size_kb": 0, 00:23:10.546 "state": "online", 00:23:10.546 "raid_level": "raid1", 00:23:10.546 "superblock": true, 00:23:10.546 "num_base_bdevs": 2, 00:23:10.546 "num_base_bdevs_discovered": 2, 00:23:10.546 "num_base_bdevs_operational": 2, 00:23:10.546 "base_bdevs_list": [ 00:23:10.546 { 00:23:10.546 "name": "spare", 00:23:10.546 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:10.546 "is_configured": true, 00:23:10.546 "data_offset": 2048, 00:23:10.546 "data_size": 63488 00:23:10.546 }, 00:23:10.546 { 00:23:10.546 "name": "BaseBdev2", 00:23:10.546 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:10.546 "is_configured": true, 00:23:10.546 "data_offset": 2048, 00:23:10.546 "data_size": 63488 00:23:10.546 } 00:23:10.546 ] 00:23:10.546 }' 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.546 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.804 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:10.804 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:10.804 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:10.804 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:10.804 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:23:10.804 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.804 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:11.063 "name": "raid_bdev1", 00:23:11.063 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:11.063 "strip_size_kb": 0, 00:23:11.063 "state": "online", 00:23:11.063 "raid_level": "raid1", 00:23:11.063 "superblock": true, 00:23:11.063 "num_base_bdevs": 2, 00:23:11.063 "num_base_bdevs_discovered": 2, 00:23:11.063 "num_base_bdevs_operational": 2, 00:23:11.063 "base_bdevs_list": [ 00:23:11.063 { 00:23:11.063 "name": "spare", 00:23:11.063 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:11.063 "is_configured": true, 00:23:11.063 "data_offset": 2048, 00:23:11.063 "data_size": 63488 00:23:11.063 }, 00:23:11.063 { 00:23:11.063 "name": "BaseBdev2", 00:23:11.063 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:11.063 "is_configured": true, 00:23:11.063 "data_offset": 2048, 00:23:11.063 "data_size": 63488 00:23:11.063 } 00:23:11.063 ] 00:23:11.063 }' 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:11.063 
17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.063 [2024-11-08 17:10:47.651856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.063 17:10:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.063 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.063 "name": "raid_bdev1", 00:23:11.063 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:11.063 "strip_size_kb": 0, 00:23:11.063 "state": "online", 00:23:11.063 "raid_level": "raid1", 00:23:11.063 "superblock": true, 00:23:11.063 "num_base_bdevs": 2, 00:23:11.064 "num_base_bdevs_discovered": 1, 00:23:11.064 "num_base_bdevs_operational": 1, 00:23:11.064 "base_bdevs_list": [ 00:23:11.064 { 00:23:11.064 "name": null, 00:23:11.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.064 "is_configured": false, 00:23:11.064 "data_offset": 0, 00:23:11.064 "data_size": 63488 00:23:11.064 }, 00:23:11.064 { 00:23:11.064 "name": "BaseBdev2", 00:23:11.064 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:11.064 "is_configured": true, 00:23:11.064 "data_offset": 2048, 00:23:11.064 "data_size": 63488 00:23:11.064 } 00:23:11.064 ] 00:23:11.064 }' 00:23:11.064 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.064 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:11.322 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:11.322 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.322 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.322 [2024-11-08 17:10:47.983937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:11.322 [2024-11-08 17:10:47.984269] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:11.322 [2024-11-08 17:10:47.984295] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:11.322 [2024-11-08 17:10:47.984338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:11.322 [2024-11-08 17:10:47.996033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:23:11.322 17:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.322 17:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:11.322 [2024-11-08 17:10:47.998079] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:12.695 17:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:12.695 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:12.695 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:12.695 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:12.695 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:12.695 17:10:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.695 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.695 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.695 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.695 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.695 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:12.695 "name": "raid_bdev1", 00:23:12.695 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:12.695 "strip_size_kb": 0, 00:23:12.695 "state": "online", 00:23:12.695 "raid_level": "raid1", 00:23:12.695 "superblock": true, 00:23:12.695 "num_base_bdevs": 2, 00:23:12.695 "num_base_bdevs_discovered": 2, 00:23:12.695 "num_base_bdevs_operational": 2, 00:23:12.695 "process": { 00:23:12.695 "type": "rebuild", 00:23:12.695 "target": "spare", 00:23:12.695 "progress": { 00:23:12.695 "blocks": 20480, 00:23:12.695 "percent": 32 00:23:12.695 } 00:23:12.695 }, 00:23:12.695 "base_bdevs_list": [ 00:23:12.695 { 00:23:12.695 "name": "spare", 00:23:12.695 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:12.695 "is_configured": true, 00:23:12.695 "data_offset": 2048, 00:23:12.695 "data_size": 63488 00:23:12.695 }, 00:23:12.696 { 00:23:12.696 "name": "BaseBdev2", 00:23:12.696 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:12.696 "is_configured": true, 00:23:12.696 "data_offset": 2048, 00:23:12.696 "data_size": 63488 00:23:12.696 } 00:23:12.696 ] 00:23:12.696 }' 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.696 [2024-11-08 17:10:49.112610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:12.696 [2024-11-08 17:10:49.205738] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:12.696 [2024-11-08 17:10:49.206014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.696 [2024-11-08 17:10:49.206037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:12.696 [2024-11-08 17:10:49.206049] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:12.696 
17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:12.696 "name": "raid_bdev1", 00:23:12.696 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:12.696 "strip_size_kb": 0, 00:23:12.696 "state": "online", 00:23:12.696 "raid_level": "raid1", 00:23:12.696 "superblock": true, 00:23:12.696 "num_base_bdevs": 2, 00:23:12.696 "num_base_bdevs_discovered": 1, 00:23:12.696 "num_base_bdevs_operational": 1, 00:23:12.696 "base_bdevs_list": [ 00:23:12.696 { 00:23:12.696 "name": null, 00:23:12.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.696 "is_configured": false, 00:23:12.696 "data_offset": 0, 00:23:12.696 "data_size": 63488 00:23:12.696 }, 00:23:12.696 { 00:23:12.696 "name": "BaseBdev2", 00:23:12.696 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:12.696 "is_configured": true, 00:23:12.696 "data_offset": 2048, 00:23:12.696 "data_size": 63488 00:23:12.696 } 00:23:12.696 ] 00:23:12.696 }' 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:12.696 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:23:12.953 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:12.953 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.953 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.953 [2024-11-08 17:10:49.562632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:12.953 [2024-11-08 17:10:49.562710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.953 [2024-11-08 17:10:49.562733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:12.953 [2024-11-08 17:10:49.562745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.954 [2024-11-08 17:10:49.563260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.954 [2024-11-08 17:10:49.563278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:12.954 [2024-11-08 17:10:49.563378] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:12.954 [2024-11-08 17:10:49.563392] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:12.954 [2024-11-08 17:10:49.563404] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:12.954 [2024-11-08 17:10:49.563424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:12.954 [2024-11-08 17:10:49.575161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:23:12.954 spare 00:23:12.954 17:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.954 17:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:12.954 [2024-11-08 17:10:49.577206] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:13.884 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.884 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:13.884 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:13.884 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:13.884 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:13.884 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.885 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.885 17:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.885 17:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:14.142 "name": "raid_bdev1", 00:23:14.142 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:14.142 "strip_size_kb": 0, 00:23:14.142 "state": "online", 00:23:14.142 
"raid_level": "raid1", 00:23:14.142 "superblock": true, 00:23:14.142 "num_base_bdevs": 2, 00:23:14.142 "num_base_bdevs_discovered": 2, 00:23:14.142 "num_base_bdevs_operational": 2, 00:23:14.142 "process": { 00:23:14.142 "type": "rebuild", 00:23:14.142 "target": "spare", 00:23:14.142 "progress": { 00:23:14.142 "blocks": 20480, 00:23:14.142 "percent": 32 00:23:14.142 } 00:23:14.142 }, 00:23:14.142 "base_bdevs_list": [ 00:23:14.142 { 00:23:14.142 "name": "spare", 00:23:14.142 "uuid": "5fe9c0b3-3c6f-57f5-a507-bdb0d6af6393", 00:23:14.142 "is_configured": true, 00:23:14.142 "data_offset": 2048, 00:23:14.142 "data_size": 63488 00:23:14.142 }, 00:23:14.142 { 00:23:14.142 "name": "BaseBdev2", 00:23:14.142 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:14.142 "is_configured": true, 00:23:14.142 "data_offset": 2048, 00:23:14.142 "data_size": 63488 00:23:14.142 } 00:23:14.142 ] 00:23:14.142 }' 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.142 [2024-11-08 17:10:50.687223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.142 [2024-11-08 17:10:50.784915] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:14.142 [2024-11-08 17:10:50.785007] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.142 [2024-11-08 17:10:50.785026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.142 [2024-11-08 17:10:50.785035] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.142 17:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.142 17:10:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.143 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.143 "name": "raid_bdev1", 00:23:14.143 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:14.143 "strip_size_kb": 0, 00:23:14.143 "state": "online", 00:23:14.143 "raid_level": "raid1", 00:23:14.143 "superblock": true, 00:23:14.143 "num_base_bdevs": 2, 00:23:14.143 "num_base_bdevs_discovered": 1, 00:23:14.143 "num_base_bdevs_operational": 1, 00:23:14.143 "base_bdevs_list": [ 00:23:14.143 { 00:23:14.143 "name": null, 00:23:14.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.143 "is_configured": false, 00:23:14.143 "data_offset": 0, 00:23:14.143 "data_size": 63488 00:23:14.143 }, 00:23:14.143 { 00:23:14.143 "name": "BaseBdev2", 00:23:14.143 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:14.143 "is_configured": true, 00:23:14.143 "data_offset": 2048, 00:23:14.143 "data_size": 63488 00:23:14.143 } 00:23:14.143 ] 00:23:14.143 }' 00:23:14.143 17:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.143 17:10:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.708 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:14.708 "name": "raid_bdev1", 00:23:14.708 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:14.708 "strip_size_kb": 0, 00:23:14.708 "state": "online", 00:23:14.708 "raid_level": "raid1", 00:23:14.708 "superblock": true, 00:23:14.708 "num_base_bdevs": 2, 00:23:14.708 "num_base_bdevs_discovered": 1, 00:23:14.708 "num_base_bdevs_operational": 1, 00:23:14.708 "base_bdevs_list": [ 00:23:14.708 { 00:23:14.708 "name": null, 00:23:14.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.709 "is_configured": false, 00:23:14.709 "data_offset": 0, 00:23:14.709 "data_size": 63488 00:23:14.709 }, 00:23:14.709 { 00:23:14.709 "name": "BaseBdev2", 00:23:14.709 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:14.709 "is_configured": true, 00:23:14.709 "data_offset": 2048, 00:23:14.709 "data_size": 63488 00:23:14.709 } 00:23:14.709 ] 00:23:14.709 }' 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.709 [2024-11-08 17:10:51.277473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:14.709 [2024-11-08 17:10:51.277540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.709 [2024-11-08 17:10:51.277564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:14.709 [2024-11-08 17:10:51.277581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.709 [2024-11-08 17:10:51.278065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.709 [2024-11-08 17:10:51.278088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:14.709 [2024-11-08 17:10:51.278174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:14.709 [2024-11-08 17:10:51.278190] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:14.709 [2024-11-08 17:10:51.278201] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:14.709 [2024-11-08 17:10:51.278212] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:14.709 BaseBdev1 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.709 17:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.642 "name": "raid_bdev1", 00:23:15.642 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:15.642 
"strip_size_kb": 0, 00:23:15.642 "state": "online", 00:23:15.642 "raid_level": "raid1", 00:23:15.642 "superblock": true, 00:23:15.642 "num_base_bdevs": 2, 00:23:15.642 "num_base_bdevs_discovered": 1, 00:23:15.642 "num_base_bdevs_operational": 1, 00:23:15.642 "base_bdevs_list": [ 00:23:15.642 { 00:23:15.642 "name": null, 00:23:15.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.642 "is_configured": false, 00:23:15.642 "data_offset": 0, 00:23:15.642 "data_size": 63488 00:23:15.642 }, 00:23:15.642 { 00:23:15.642 "name": "BaseBdev2", 00:23:15.642 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:15.642 "is_configured": true, 00:23:15.642 "data_offset": 2048, 00:23:15.642 "data_size": 63488 00:23:15.642 } 00:23:15.642 ] 00:23:15.642 }' 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.642 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.899 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:15.899 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:15.900 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:15.900 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:15.900 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:15.900 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.900 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.900 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.900 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.156 17:10:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.156 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:16.156 "name": "raid_bdev1", 00:23:16.156 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:16.156 "strip_size_kb": 0, 00:23:16.156 "state": "online", 00:23:16.156 "raid_level": "raid1", 00:23:16.156 "superblock": true, 00:23:16.156 "num_base_bdevs": 2, 00:23:16.156 "num_base_bdevs_discovered": 1, 00:23:16.156 "num_base_bdevs_operational": 1, 00:23:16.156 "base_bdevs_list": [ 00:23:16.157 { 00:23:16.157 "name": null, 00:23:16.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.157 "is_configured": false, 00:23:16.157 "data_offset": 0, 00:23:16.157 "data_size": 63488 00:23:16.157 }, 00:23:16.157 { 00:23:16.157 "name": "BaseBdev2", 00:23:16.157 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:16.157 "is_configured": true, 00:23:16.157 "data_offset": 2048, 00:23:16.157 "data_size": 63488 00:23:16.157 } 00:23:16.157 ] 00:23:16.157 }' 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local 
arg=rpc_cmd 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.157 [2024-11-08 17:10:52.705893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:16.157 [2024-11-08 17:10:52.706054] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:16.157 [2024-11-08 17:10:52.706068] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:16.157 request: 00:23:16.157 { 00:23:16.157 "base_bdev": "BaseBdev1", 00:23:16.157 "raid_bdev": "raid_bdev1", 00:23:16.157 "method": "bdev_raid_add_base_bdev", 00:23:16.157 "req_id": 1 00:23:16.157 } 00:23:16.157 Got JSON-RPC error response 00:23:16.157 response: 00:23:16.157 { 00:23:16.157 "code": -22, 00:23:16.157 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:16.157 } 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:16.157 17:10:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:16.157 17:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.141 "name": "raid_bdev1", 00:23:17.141 "uuid": 
"45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:17.141 "strip_size_kb": 0, 00:23:17.141 "state": "online", 00:23:17.141 "raid_level": "raid1", 00:23:17.141 "superblock": true, 00:23:17.141 "num_base_bdevs": 2, 00:23:17.141 "num_base_bdevs_discovered": 1, 00:23:17.141 "num_base_bdevs_operational": 1, 00:23:17.141 "base_bdevs_list": [ 00:23:17.141 { 00:23:17.141 "name": null, 00:23:17.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.141 "is_configured": false, 00:23:17.141 "data_offset": 0, 00:23:17.141 "data_size": 63488 00:23:17.141 }, 00:23:17.141 { 00:23:17.141 "name": "BaseBdev2", 00:23:17.141 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:17.141 "is_configured": true, 00:23:17.141 "data_offset": 2048, 00:23:17.141 "data_size": 63488 00:23:17.141 } 00:23:17.141 ] 00:23:17.141 }' 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.141 17:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.399 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:17.399 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:17.399 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:17.399 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:17.400 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:17.400 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.400 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.400 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.400 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:23:17.400 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.400 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:17.400 "name": "raid_bdev1", 00:23:17.400 "uuid": "45985d5e-a4c4-4e9c-9131-4ac81b917dec", 00:23:17.400 "strip_size_kb": 0, 00:23:17.400 "state": "online", 00:23:17.400 "raid_level": "raid1", 00:23:17.400 "superblock": true, 00:23:17.400 "num_base_bdevs": 2, 00:23:17.400 "num_base_bdevs_discovered": 1, 00:23:17.400 "num_base_bdevs_operational": 1, 00:23:17.400 "base_bdevs_list": [ 00:23:17.400 { 00:23:17.400 "name": null, 00:23:17.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.400 "is_configured": false, 00:23:17.400 "data_offset": 0, 00:23:17.400 "data_size": 63488 00:23:17.400 }, 00:23:17.400 { 00:23:17.400 "name": "BaseBdev2", 00:23:17.400 "uuid": "7d118cb0-b1b3-5c31-b469-126bc435e573", 00:23:17.400 "is_configured": true, 00:23:17.400 "data_offset": 2048, 00:23:17.400 "data_size": 63488 00:23:17.400 } 00:23:17.400 ] 00:23:17.400 }' 00:23:17.400 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:17.400 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:17.400 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 74185 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74185 ']' 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 74185 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74185 00:23:17.659 killing process with pid 74185 00:23:17.659 Received shutdown signal, test time was about 60.000000 seconds 00:23:17.659 00:23:17.659 Latency(us) 00:23:17.659 [2024-11-08T17:10:54.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.659 [2024-11-08T17:10:54.374Z] =================================================================================================================== 00:23:17.659 [2024-11-08T17:10:54.374Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74185' 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 74185 00:23:17.659 [2024-11-08 17:10:54.155213] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:17.659 17:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 74185 00:23:17.659 [2024-11-08 17:10:54.155350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.659 [2024-11-08 17:10:54.155404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.659 [2024-11-08 17:10:54.155416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:17.659 [2024-11-08 17:10:54.350107] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:18.593 ************************************ 00:23:18.593 END TEST raid_rebuild_test_sb 
00:23:18.593 ************************************ 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:18.593 00:23:18.593 real 0m24.373s 00:23:18.593 user 0m27.256s 00:23:18.593 sys 0m4.264s 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.593 17:10:55 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:23:18.593 17:10:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:23:18.593 17:10:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:18.593 17:10:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.593 ************************************ 00:23:18.593 START TEST raid_rebuild_test_io 00:23:18.593 ************************************ 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:18.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74932 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74932 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 74932 ']' 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:18.593 17:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:18.593 [2024-11-08 17:10:55.229884] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:23:18.593 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:18.593 Zero copy mechanism will not be used. 
00:23:18.593 [2024-11-08 17:10:55.230189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74932 ] 00:23:18.852 [2024-11-08 17:10:55.385092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.852 [2024-11-08 17:10:55.499273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.110 [2024-11-08 17:10:55.646198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:19.110 [2024-11-08 17:10:55.646251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:19.368 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:19.368 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:23:19.368 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:19.368 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:19.368 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.368 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.626 BaseBdev1_malloc 00:23:19.626 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.626 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:19.626 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.626 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.626 [2024-11-08 17:10:56.108557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:23:19.626 [2024-11-08 17:10:56.108623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.626 [2024-11-08 17:10:56.108643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:19.626 [2024-11-08 17:10:56.108654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.626 [2024-11-08 17:10:56.110933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.626 [2024-11-08 17:10:56.110971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:19.626 BaseBdev1 00:23:19.626 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.627 BaseBdev2_malloc 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.627 [2024-11-08 17:10:56.146304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:19.627 [2024-11-08 17:10:56.146357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.627 [2024-11-08 17:10:56.146373] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:19.627 [2024-11-08 17:10:56.146384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.627 [2024-11-08 17:10:56.148561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.627 [2024-11-08 17:10:56.148695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:19.627 BaseBdev2 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.627 spare_malloc 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.627 spare_delay 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.627 [2024-11-08 17:10:56.204251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:23:19.627 [2024-11-08 17:10:56.204304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.627 [2024-11-08 17:10:56.204325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:19.627 [2024-11-08 17:10:56.204337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.627 [2024-11-08 17:10:56.206575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.627 [2024-11-08 17:10:56.206707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:19.627 spare 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.627 [2024-11-08 17:10:56.212309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:19.627 [2024-11-08 17:10:56.214315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:19.627 [2024-11-08 17:10:56.214470] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:19.627 [2024-11-08 17:10:56.214507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:19.627 [2024-11-08 17:10:56.214878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:19.627 [2024-11-08 17:10:56.215089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:19.627 [2024-11-08 17:10:56.215121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:23:19.627 [2024-11-08 17:10:56.215314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.627 
"name": "raid_bdev1", 00:23:19.627 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:19.627 "strip_size_kb": 0, 00:23:19.627 "state": "online", 00:23:19.627 "raid_level": "raid1", 00:23:19.627 "superblock": false, 00:23:19.627 "num_base_bdevs": 2, 00:23:19.627 "num_base_bdevs_discovered": 2, 00:23:19.627 "num_base_bdevs_operational": 2, 00:23:19.627 "base_bdevs_list": [ 00:23:19.627 { 00:23:19.627 "name": "BaseBdev1", 00:23:19.627 "uuid": "e6480923-9961-575d-a65b-6799d832cd15", 00:23:19.627 "is_configured": true, 00:23:19.627 "data_offset": 0, 00:23:19.627 "data_size": 65536 00:23:19.627 }, 00:23:19.627 { 00:23:19.627 "name": "BaseBdev2", 00:23:19.627 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:19.627 "is_configured": true, 00:23:19.627 "data_offset": 0, 00:23:19.627 "data_size": 65536 00:23:19.627 } 00:23:19.627 ] 00:23:19.627 }' 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.627 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:19.885 [2024-11-08 17:10:56.500688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:19.885 [2024-11-08 17:10:56.564368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:19.885 17:10:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:19.885 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.167 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.167 "name": "raid_bdev1", 00:23:20.167 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:20.167 "strip_size_kb": 0, 00:23:20.167 "state": "online", 00:23:20.167 "raid_level": "raid1", 00:23:20.167 "superblock": false, 00:23:20.167 "num_base_bdevs": 2, 00:23:20.167 "num_base_bdevs_discovered": 1, 00:23:20.167 "num_base_bdevs_operational": 1, 00:23:20.167 "base_bdevs_list": [ 00:23:20.167 { 00:23:20.167 "name": null, 00:23:20.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.168 "is_configured": false, 00:23:20.168 "data_offset": 0, 00:23:20.168 "data_size": 65536 00:23:20.168 }, 00:23:20.168 { 00:23:20.168 "name": "BaseBdev2", 00:23:20.168 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:20.168 "is_configured": true, 00:23:20.168 "data_offset": 0, 00:23:20.168 "data_size": 65536 00:23:20.168 } 00:23:20.168 ] 00:23:20.168 }' 00:23:20.168 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:23:20.168 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:20.168 [2024-11-08 17:10:56.654302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:20.168 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:20.168 Zero copy mechanism will not be used. 00:23:20.168 Running I/O for 60 seconds... 00:23:20.168 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:20.168 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.168 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:20.425 [2024-11-08 17:10:56.867319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:20.425 17:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.425 17:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:20.425 [2024-11-08 17:10:56.917530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:20.425 [2024-11-08 17:10:56.919616] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:20.425 [2024-11-08 17:10:57.034205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:20.425 [2024-11-08 17:10:57.034728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:20.683 [2024-11-08 17:10:57.236589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:20.683 [2024-11-08 17:10:57.236922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:20.940 [2024-11-08 17:10:57.477280] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:20.940 [2024-11-08 17:10:57.477773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:21.198 136.00 IOPS, 408.00 MiB/s [2024-11-08T17:10:57.913Z] [2024-11-08 17:10:57.688658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:21.198 [2024-11-08 17:10:57.689069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:21.455 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.455 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:21.456 "name": "raid_bdev1", 00:23:21.456 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:21.456 
"strip_size_kb": 0, 00:23:21.456 "state": "online", 00:23:21.456 "raid_level": "raid1", 00:23:21.456 "superblock": false, 00:23:21.456 "num_base_bdevs": 2, 00:23:21.456 "num_base_bdevs_discovered": 2, 00:23:21.456 "num_base_bdevs_operational": 2, 00:23:21.456 "process": { 00:23:21.456 "type": "rebuild", 00:23:21.456 "target": "spare", 00:23:21.456 "progress": { 00:23:21.456 "blocks": 12288, 00:23:21.456 "percent": 18 00:23:21.456 } 00:23:21.456 }, 00:23:21.456 "base_bdevs_list": [ 00:23:21.456 { 00:23:21.456 "name": "spare", 00:23:21.456 "uuid": "7f8d1ef2-767d-5fa4-bac6-9bb228afadf9", 00:23:21.456 "is_configured": true, 00:23:21.456 "data_offset": 0, 00:23:21.456 "data_size": 65536 00:23:21.456 }, 00:23:21.456 { 00:23:21.456 "name": "BaseBdev2", 00:23:21.456 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:21.456 "is_configured": true, 00:23:21.456 "data_offset": 0, 00:23:21.456 "data_size": 65536 00:23:21.456 } 00:23:21.456 ] 00:23:21.456 }' 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:21.456 17:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:21.456 [2024-11-08 17:10:58.028447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:21.456 [2024-11-08 17:10:58.029569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:23:21.456 [2024-11-08 17:10:58.037277] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:21.456 [2024-11-08 17:10:58.045433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.456 [2024-11-08 17:10:58.045464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:21.456 [2024-11-08 17:10:58.045483] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:21.456 [2024-11-08 17:10:58.080091] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.456 "name": "raid_bdev1", 00:23:21.456 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:21.456 "strip_size_kb": 0, 00:23:21.456 "state": "online", 00:23:21.456 "raid_level": "raid1", 00:23:21.456 "superblock": false, 00:23:21.456 "num_base_bdevs": 2, 00:23:21.456 "num_base_bdevs_discovered": 1, 00:23:21.456 "num_base_bdevs_operational": 1, 00:23:21.456 "base_bdevs_list": [ 00:23:21.456 { 00:23:21.456 "name": null, 00:23:21.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.456 "is_configured": false, 00:23:21.456 "data_offset": 0, 00:23:21.456 "data_size": 65536 00:23:21.456 }, 00:23:21.456 { 00:23:21.456 "name": "BaseBdev2", 00:23:21.456 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:21.456 "is_configured": true, 00:23:21.456 "data_offset": 0, 00:23:21.456 "data_size": 65536 00:23:21.456 } 00:23:21.456 ] 00:23:21.456 }' 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.456 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:22.030 17:10:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:22.030 "name": "raid_bdev1", 00:23:22.030 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:22.030 "strip_size_kb": 0, 00:23:22.030 "state": "online", 00:23:22.030 "raid_level": "raid1", 00:23:22.030 "superblock": false, 00:23:22.030 "num_base_bdevs": 2, 00:23:22.030 "num_base_bdevs_discovered": 1, 00:23:22.030 "num_base_bdevs_operational": 1, 00:23:22.030 "base_bdevs_list": [ 00:23:22.030 { 00:23:22.030 "name": null, 00:23:22.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.030 "is_configured": false, 00:23:22.030 "data_offset": 0, 00:23:22.030 "data_size": 65536 00:23:22.030 }, 00:23:22.030 { 00:23:22.030 "name": "BaseBdev2", 00:23:22.030 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:22.030 "is_configured": true, 00:23:22.030 "data_offset": 0, 00:23:22.030 "data_size": 65536 00:23:22.030 } 00:23:22.030 ] 00:23:22.030 }' 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- 
# jq -r '.process.target // "none"' 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:22.030 [2024-11-08 17:10:58.555491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.030 17:10:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:22.030 [2024-11-08 17:10:58.605956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:22.030 [2024-11-08 17:10:58.608076] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:22.030 165.00 IOPS, 495.00 MiB/s [2024-11-08T17:10:58.745Z] [2024-11-08 17:10:58.723107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:22.030 [2024-11-08 17:10:58.723643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:22.317 [2024-11-08 17:10:58.947103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:22.317 [2024-11-08 17:10:58.947417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:22.882 [2024-11-08 17:10:59.297134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:22.882 [2024-11-08 17:10:59.297625] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:22.882 [2024-11-08 17:10:59.540028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:23.146 "name": "raid_bdev1", 00:23:23.146 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:23.146 "strip_size_kb": 0, 00:23:23.146 "state": "online", 00:23:23.146 "raid_level": "raid1", 00:23:23.146 "superblock": false, 00:23:23.146 "num_base_bdevs": 2, 00:23:23.146 "num_base_bdevs_discovered": 2, 00:23:23.146 "num_base_bdevs_operational": 2, 00:23:23.146 "process": { 00:23:23.146 "type": "rebuild", 00:23:23.146 "target": "spare", 00:23:23.146 "progress": { 00:23:23.146 "blocks": 14336, 00:23:23.146 "percent": 21 00:23:23.146 } 
00:23:23.146 }, 00:23:23.146 "base_bdevs_list": [ 00:23:23.146 { 00:23:23.146 "name": "spare", 00:23:23.146 "uuid": "7f8d1ef2-767d-5fa4-bac6-9bb228afadf9", 00:23:23.146 "is_configured": true, 00:23:23.146 "data_offset": 0, 00:23:23.146 "data_size": 65536 00:23:23.146 }, 00:23:23.146 { 00:23:23.146 "name": "BaseBdev2", 00:23:23.146 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:23.146 "is_configured": true, 00:23:23.146 "data_offset": 0, 00:23:23.146 "data_size": 65536 00:23:23.146 } 00:23:23.146 ] 00:23:23.146 }' 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:23.146 [2024-11-08 17:10:59.651207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:23.146 139.67 IOPS, 419.00 MiB/s [2024-11-08T17:10:59.861Z] 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=347 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:23.146 17:10:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.146 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:23.146 "name": "raid_bdev1", 00:23:23.146 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:23.146 "strip_size_kb": 0, 00:23:23.146 "state": "online", 00:23:23.146 "raid_level": "raid1", 00:23:23.146 "superblock": false, 00:23:23.146 "num_base_bdevs": 2, 00:23:23.146 "num_base_bdevs_discovered": 2, 00:23:23.147 "num_base_bdevs_operational": 2, 00:23:23.147 "process": { 00:23:23.147 "type": "rebuild", 00:23:23.147 "target": "spare", 00:23:23.147 "progress": { 00:23:23.147 "blocks": 16384, 00:23:23.147 "percent": 25 00:23:23.147 } 00:23:23.147 }, 00:23:23.147 "base_bdevs_list": [ 00:23:23.147 { 00:23:23.147 "name": "spare", 00:23:23.147 "uuid": "7f8d1ef2-767d-5fa4-bac6-9bb228afadf9", 00:23:23.147 "is_configured": true, 00:23:23.147 "data_offset": 0, 00:23:23.147 "data_size": 65536 00:23:23.147 }, 00:23:23.147 { 00:23:23.147 "name": "BaseBdev2", 00:23:23.147 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:23.147 
"is_configured": true, 00:23:23.147 "data_offset": 0, 00:23:23.147 "data_size": 65536 00:23:23.147 } 00:23:23.147 ] 00:23:23.147 }' 00:23:23.147 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:23.147 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:23.147 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:23.147 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.147 17:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:23.411 [2024-11-08 17:11:00.075836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:23.411 [2024-11-08 17:11:00.076155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:24.017 [2024-11-08 17:11:00.388192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:24.017 [2024-11-08 17:11:00.597674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:24.017 [2024-11-08 17:11:00.598219] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:24.275 124.00 IOPS, 372.00 MiB/s [2024-11-08T17:11:00.990Z] 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:24.275 "name": "raid_bdev1", 00:23:24.275 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:24.275 "strip_size_kb": 0, 00:23:24.275 "state": "online", 00:23:24.275 "raid_level": "raid1", 00:23:24.275 "superblock": false, 00:23:24.275 "num_base_bdevs": 2, 00:23:24.275 "num_base_bdevs_discovered": 2, 00:23:24.275 "num_base_bdevs_operational": 2, 00:23:24.275 "process": { 00:23:24.275 "type": "rebuild", 00:23:24.275 "target": "spare", 00:23:24.275 "progress": { 00:23:24.275 "blocks": 30720, 00:23:24.275 "percent": 46 00:23:24.275 } 00:23:24.275 }, 00:23:24.275 "base_bdevs_list": [ 00:23:24.275 { 00:23:24.275 "name": "spare", 00:23:24.275 "uuid": "7f8d1ef2-767d-5fa4-bac6-9bb228afadf9", 00:23:24.275 "is_configured": true, 00:23:24.275 "data_offset": 0, 00:23:24.275 "data_size": 65536 00:23:24.275 }, 00:23:24.275 { 00:23:24.275 "name": "BaseBdev2", 00:23:24.275 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:24.275 "is_configured": true, 00:23:24.275 "data_offset": 0, 00:23:24.275 "data_size": 65536 00:23:24.275 } 00:23:24.275 ] 00:23:24.275 }' 00:23:24.275 17:11:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:24.275 17:11:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:24.275 [2024-11-08 17:11:00.925315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:24.533 [2024-11-08 17:11:01.151454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:24.791 [2024-11-08 17:11:01.400705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:23:25.306 107.00 IOPS, 321.00 MiB/s [2024-11-08T17:11:02.021Z] 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.306 17:11:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:25.306 "name": "raid_bdev1", 00:23:25.306 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:25.306 "strip_size_kb": 0, 00:23:25.306 "state": "online", 00:23:25.306 "raid_level": "raid1", 00:23:25.306 "superblock": false, 00:23:25.306 "num_base_bdevs": 2, 00:23:25.306 "num_base_bdevs_discovered": 2, 00:23:25.306 "num_base_bdevs_operational": 2, 00:23:25.306 "process": { 00:23:25.306 "type": "rebuild", 00:23:25.306 "target": "spare", 00:23:25.306 "progress": { 00:23:25.306 "blocks": 45056, 00:23:25.306 "percent": 68 00:23:25.306 } 00:23:25.306 }, 00:23:25.306 "base_bdevs_list": [ 00:23:25.306 { 00:23:25.306 "name": "spare", 00:23:25.306 "uuid": "7f8d1ef2-767d-5fa4-bac6-9bb228afadf9", 00:23:25.306 "is_configured": true, 00:23:25.306 "data_offset": 0, 00:23:25.306 "data_size": 65536 00:23:25.306 }, 00:23:25.306 { 00:23:25.306 "name": "BaseBdev2", 00:23:25.306 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:25.306 "is_configured": true, 00:23:25.306 "data_offset": 0, 00:23:25.306 "data_size": 65536 00:23:25.306 } 00:23:25.306 ] 00:23:25.306 }' 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:25.306 17:11:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:25.306 17:11:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:25.306 17:11:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:23:26.497 94.33 IOPS, 283.00 MiB/s [2024-11-08T17:11:03.212Z] [2024-11-08 17:11:02.963607] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:26.497 "name": "raid_bdev1", 00:23:26.497 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:26.497 "strip_size_kb": 0, 00:23:26.497 "state": "online", 00:23:26.497 "raid_level": "raid1", 00:23:26.497 "superblock": false, 00:23:26.497 "num_base_bdevs": 2, 00:23:26.497 "num_base_bdevs_discovered": 2, 00:23:26.497 "num_base_bdevs_operational": 2, 00:23:26.497 "process": { 00:23:26.497 "type": "rebuild", 00:23:26.497 "target": "spare", 00:23:26.497 
"progress": { 00:23:26.497 "blocks": 65536, 00:23:26.497 "percent": 100 00:23:26.497 } 00:23:26.497 }, 00:23:26.497 "base_bdevs_list": [ 00:23:26.497 { 00:23:26.497 "name": "spare", 00:23:26.497 "uuid": "7f8d1ef2-767d-5fa4-bac6-9bb228afadf9", 00:23:26.497 "is_configured": true, 00:23:26.497 "data_offset": 0, 00:23:26.497 "data_size": 65536 00:23:26.497 }, 00:23:26.497 { 00:23:26.497 "name": "BaseBdev2", 00:23:26.497 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:26.497 "is_configured": true, 00:23:26.497 "data_offset": 0, 00:23:26.497 "data_size": 65536 00:23:26.497 } 00:23:26.497 ] 00:23:26.497 }' 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:26.497 [2024-11-08 17:11:03.070137] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:26.497 [2024-11-08 17:11:03.072744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:26.497 17:11:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:27.629 85.86 IOPS, 257.57 MiB/s [2024-11-08T17:11:04.344Z] 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:27.629 "name": "raid_bdev1", 00:23:27.629 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:27.629 "strip_size_kb": 0, 00:23:27.629 "state": "online", 00:23:27.629 "raid_level": "raid1", 00:23:27.629 "superblock": false, 00:23:27.629 "num_base_bdevs": 2, 00:23:27.629 "num_base_bdevs_discovered": 2, 00:23:27.629 "num_base_bdevs_operational": 2, 00:23:27.629 "base_bdevs_list": [ 00:23:27.629 { 00:23:27.629 "name": "spare", 00:23:27.629 "uuid": "7f8d1ef2-767d-5fa4-bac6-9bb228afadf9", 00:23:27.629 "is_configured": true, 00:23:27.629 "data_offset": 0, 00:23:27.629 "data_size": 65536 00:23:27.629 }, 00:23:27.629 { 00:23:27.629 "name": "BaseBdev2", 00:23:27.629 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:27.629 "is_configured": true, 00:23:27.629 "data_offset": 0, 00:23:27.629 "data_size": 65536 00:23:27.629 } 00:23:27.629 ] 00:23:27.629 }' 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:27.629 "name": "raid_bdev1", 00:23:27.629 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:27.629 "strip_size_kb": 0, 00:23:27.629 "state": "online", 00:23:27.629 "raid_level": "raid1", 00:23:27.629 "superblock": false, 00:23:27.629 "num_base_bdevs": 2, 00:23:27.629 "num_base_bdevs_discovered": 2, 00:23:27.629 "num_base_bdevs_operational": 2, 00:23:27.629 "base_bdevs_list": [ 00:23:27.629 { 00:23:27.629 "name": "spare", 00:23:27.629 "uuid": "7f8d1ef2-767d-5fa4-bac6-9bb228afadf9", 00:23:27.629 "is_configured": true, 00:23:27.629 "data_offset": 0, 00:23:27.629 
"data_size": 65536 00:23:27.629 }, 00:23:27.629 { 00:23:27.629 "name": "BaseBdev2", 00:23:27.629 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:27.629 "is_configured": true, 00:23:27.629 "data_offset": 0, 00:23:27.629 "data_size": 65536 00:23:27.629 } 00:23:27.629 ] 00:23:27.629 }' 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:27.629 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.888 17:11:04 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.888 "name": "raid_bdev1", 00:23:27.888 "uuid": "0f92c2a5-7faf-4034-809d-e2bfa74ab857", 00:23:27.888 "strip_size_kb": 0, 00:23:27.888 "state": "online", 00:23:27.888 "raid_level": "raid1", 00:23:27.888 "superblock": false, 00:23:27.888 "num_base_bdevs": 2, 00:23:27.888 "num_base_bdevs_discovered": 2, 00:23:27.888 "num_base_bdevs_operational": 2, 00:23:27.888 "base_bdevs_list": [ 00:23:27.888 { 00:23:27.888 "name": "spare", 00:23:27.888 "uuid": "7f8d1ef2-767d-5fa4-bac6-9bb228afadf9", 00:23:27.888 "is_configured": true, 00:23:27.888 "data_offset": 0, 00:23:27.888 "data_size": 65536 00:23:27.888 }, 00:23:27.888 { 00:23:27.888 "name": "BaseBdev2", 00:23:27.888 "uuid": "14905a92-52e5-5413-94e8-f56bbdbd6f99", 00:23:27.888 "is_configured": true, 00:23:27.888 "data_offset": 0, 00:23:27.888 "data_size": 65536 00:23:27.888 } 00:23:27.888 ] 00:23:27.888 }' 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.888 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.147 79.38 IOPS, 238.12 MiB/s [2024-11-08T17:11:04.862Z] 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:28.147 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.147 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.147 [2024-11-08 17:11:04.689224] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:28.147 [2024-11-08 17:11:04.689262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:28.147 00:23:28.147 Latency(us) 00:23:28.147 [2024-11-08T17:11:04.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.147 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:28.147 raid_bdev1 : 8.04 79.19 237.57 0.00 0.00 17025.10 324.53 114536.76 00:23:28.148 [2024-11-08T17:11:04.863Z] =================================================================================================================== 00:23:28.148 [2024-11-08T17:11:04.863Z] Total : 79.19 237.57 0.00 0.00 17025.10 324.53 114536.76 00:23:28.148 [2024-11-08 17:11:04.716154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.148 { 00:23:28.148 "results": [ 00:23:28.148 { 00:23:28.148 "job": "raid_bdev1", 00:23:28.148 "core_mask": "0x1", 00:23:28.148 "workload": "randrw", 00:23:28.148 "percentage": 50, 00:23:28.148 "status": "finished", 00:23:28.148 "queue_depth": 2, 00:23:28.148 "io_size": 3145728, 00:23:28.148 "runtime": 8.043993, 00:23:28.148 "iops": 79.18952689292495, 00:23:28.148 "mibps": 237.56858067877482, 00:23:28.148 "io_failed": 0, 00:23:28.148 "io_timeout": 0, 00:23:28.148 "avg_latency_us": 17025.09621543292, 00:23:28.148 "min_latency_us": 324.52923076923076, 00:23:28.148 "max_latency_us": 114536.76307692307 00:23:28.148 } 00:23:28.148 ], 00:23:28.148 "core_count": 1 00:23:28.148 } 00:23:28.148 [2024-11-08 17:11:04.716335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:28.148 [2024-11-08 17:11:04.716447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:28.148 [2024-11-08 17:11:04.716465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name 
raid_bdev1, state offline 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:28.148 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:23:28.406 /dev/nbd0 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:28.406 1+0 records in 00:23:28.406 1+0 records out 00:23:28.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046698 s, 8.8 MB/s 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:28.406 17:11:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:28.664 /dev/nbd1 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:28.664 17:11:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:28.664 1+0 records in 00:23:28.664 1+0 records out 00:23:28.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176722 s, 23.2 MB/s 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:28.664 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:28.923 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 74932 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 74932 ']' 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 74932 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74932 00:23:29.181 killing process with pid 74932 00:23:29.181 
Received shutdown signal, test time was about 9.188665 seconds 00:23:29.181 00:23:29.181 Latency(us) 00:23:29.181 [2024-11-08T17:11:05.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.181 [2024-11-08T17:11:05.896Z] =================================================================================================================== 00:23:29.181 [2024-11-08T17:11:05.896Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74932' 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 74932 00:23:29.181 [2024-11-08 17:11:05.845109] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:29.181 17:11:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 74932 00:23:29.437 [2024-11-08 17:11:05.993554] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:23:30.369 00:23:30.369 real 0m11.628s 00:23:30.369 user 0m14.162s 00:23:30.369 sys 0m1.105s 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:30.369 ************************************ 00:23:30.369 END TEST raid_rebuild_test_io 00:23:30.369 ************************************ 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:30.369 17:11:06 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:23:30.369 17:11:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:23:30.369 
17:11:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:30.369 17:11:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:30.369 ************************************ 00:23:30.369 START TEST raid_rebuild_test_sb_io 00:23:30.369 ************************************ 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:30.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=75319 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 75319 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 75319 ']' 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:30.369 17:11:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:30.369 [2024-11-08 17:11:06.937250] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:23:30.369 [2024-11-08 17:11:06.937807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75319 ] 00:23:30.369 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:30.369 Zero copy mechanism will not be used. 00:23:30.627 [2024-11-08 17:11:07.115961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.627 [2024-11-08 17:11:07.253110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.885 [2024-11-08 17:11:07.400315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:30.885 [2024-11-08 17:11:07.400540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.143 BaseBdev1_malloc 00:23:31.143 17:11:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.143 [2024-11-08 17:11:07.796422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:31.143 [2024-11-08 17:11:07.796487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.143 [2024-11-08 17:11:07.796513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:31.143 [2024-11-08 17:11:07.796526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.143 [2024-11-08 17:11:07.798819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.143 [2024-11-08 17:11:07.798855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:31.143 BaseBdev1 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.143 BaseBdev2_malloc 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # 
rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.143 [2024-11-08 17:11:07.834284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:31.143 [2024-11-08 17:11:07.834444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.143 [2024-11-08 17:11:07.834468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:31.143 [2024-11-08 17:11:07.834481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.143 [2024-11-08 17:11:07.836690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.143 [2024-11-08 17:11:07.836721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:31.143 BaseBdev2 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.143 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.401 spare_malloc 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:23:31.401 spare_delay 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.401 [2024-11-08 17:11:07.897516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:31.401 [2024-11-08 17:11:07.897571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.401 [2024-11-08 17:11:07.897592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:31.401 [2024-11-08 17:11:07.897604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.401 [2024-11-08 17:11:07.899865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.401 [2024-11-08 17:11:07.899900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:31.401 spare 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.401 [2024-11-08 17:11:07.905588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:31.401 [2024-11-08 17:11:07.907593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:31.401 [2024-11-08 
17:11:07.907775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:31.401 [2024-11-08 17:11:07.907792] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:31.401 [2024-11-08 17:11:07.908052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:31.401 [2024-11-08 17:11:07.908216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:31.401 [2024-11-08 17:11:07.908226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:31.401 [2024-11-08 17:11:07.908364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.401 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.402 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.402 "name": "raid_bdev1", 00:23:31.402 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:31.402 "strip_size_kb": 0, 00:23:31.402 "state": "online", 00:23:31.402 "raid_level": "raid1", 00:23:31.402 "superblock": true, 00:23:31.402 "num_base_bdevs": 2, 00:23:31.402 "num_base_bdevs_discovered": 2, 00:23:31.402 "num_base_bdevs_operational": 2, 00:23:31.402 "base_bdevs_list": [ 00:23:31.402 { 00:23:31.402 "name": "BaseBdev1", 00:23:31.402 "uuid": "7f66b92c-e9a4-56ff-8305-73626effc635", 00:23:31.402 "is_configured": true, 00:23:31.402 "data_offset": 2048, 00:23:31.402 "data_size": 63488 00:23:31.402 }, 00:23:31.402 { 00:23:31.402 "name": "BaseBdev2", 00:23:31.402 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:31.402 "is_configured": true, 00:23:31.402 "data_offset": 2048, 00:23:31.402 "data_size": 63488 00:23:31.402 } 00:23:31.402 ] 00:23:31.402 }' 00:23:31.402 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.402 17:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:31.668 17:11:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.668 [2024-11-08 17:11:08.233981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.668 [2024-11-08 17:11:08.305658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.668 "name": "raid_bdev1", 00:23:31.668 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:31.668 "strip_size_kb": 0, 00:23:31.668 "state": "online", 00:23:31.668 
"raid_level": "raid1", 00:23:31.668 "superblock": true, 00:23:31.668 "num_base_bdevs": 2, 00:23:31.668 "num_base_bdevs_discovered": 1, 00:23:31.668 "num_base_bdevs_operational": 1, 00:23:31.668 "base_bdevs_list": [ 00:23:31.668 { 00:23:31.668 "name": null, 00:23:31.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.668 "is_configured": false, 00:23:31.668 "data_offset": 0, 00:23:31.668 "data_size": 63488 00:23:31.668 }, 00:23:31.668 { 00:23:31.668 "name": "BaseBdev2", 00:23:31.668 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:31.668 "is_configured": true, 00:23:31.668 "data_offset": 2048, 00:23:31.668 "data_size": 63488 00:23:31.668 } 00:23:31.668 ] 00:23:31.668 }' 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.668 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.939 [2024-11-08 17:11:08.395486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:31.939 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:31.939 Zero copy mechanism will not be used. 00:23:31.939 Running I/O for 60 seconds... 
00:23:31.939 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:31.939 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.939 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.939 [2024-11-08 17:11:08.625149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:32.197 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.197 17:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:32.197 [2024-11-08 17:11:08.688593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:32.197 [2024-11-08 17:11:08.690646] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:32.197 [2024-11-08 17:11:08.819111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:32.197 [2024-11-08 17:11:08.819717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:32.454 [2024-11-08 17:11:08.935643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:32.454 [2024-11-08 17:11:08.936095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:32.979 141.00 IOPS, 423.00 MiB/s [2024-11-08T17:11:09.694Z] [2024-11-08 17:11:09.444245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:32.979 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.979 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:23:32.979 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:32.979 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:32.979 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:32.979 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.979 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.979 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:32.979 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.236 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.236 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:33.236 "name": "raid_bdev1", 00:23:33.236 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:33.236 "strip_size_kb": 0, 00:23:33.236 "state": "online", 00:23:33.236 "raid_level": "raid1", 00:23:33.236 "superblock": true, 00:23:33.236 "num_base_bdevs": 2, 00:23:33.236 "num_base_bdevs_discovered": 2, 00:23:33.236 "num_base_bdevs_operational": 2, 00:23:33.236 "process": { 00:23:33.236 "type": "rebuild", 00:23:33.236 "target": "spare", 00:23:33.236 "progress": { 00:23:33.236 "blocks": 12288, 00:23:33.236 "percent": 19 00:23:33.236 } 00:23:33.236 }, 00:23:33.236 "base_bdevs_list": [ 00:23:33.236 { 00:23:33.236 "name": "spare", 00:23:33.236 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:33.236 "is_configured": true, 00:23:33.236 "data_offset": 2048, 00:23:33.236 "data_size": 63488 00:23:33.236 }, 00:23:33.236 { 00:23:33.236 "name": "BaseBdev2", 00:23:33.236 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:33.236 "is_configured": true, 
00:23:33.236 "data_offset": 2048, 00:23:33.236 "data_size": 63488 00:23:33.236 } 00:23:33.236 ] 00:23:33.236 }' 00:23:33.236 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:33.236 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.236 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:33.236 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.236 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:33.236 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.236 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:33.236 [2024-11-08 17:11:09.757744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.236 [2024-11-08 17:11:09.783388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:33.236 [2024-11-08 17:11:09.841790] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:33.237 [2024-11-08 17:11:09.857582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.237 [2024-11-08 17:11:09.857782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.237 [2024-11-08 17:11:09.857809] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:33.237 [2024-11-08 17:11:09.892322] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.237 17:11:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.237 "name": "raid_bdev1", 00:23:33.237 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:33.237 "strip_size_kb": 0, 00:23:33.237 "state": "online", 00:23:33.237 "raid_level": "raid1", 00:23:33.237 
"superblock": true, 00:23:33.237 "num_base_bdevs": 2, 00:23:33.237 "num_base_bdevs_discovered": 1, 00:23:33.237 "num_base_bdevs_operational": 1, 00:23:33.237 "base_bdevs_list": [ 00:23:33.237 { 00:23:33.237 "name": null, 00:23:33.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.237 "is_configured": false, 00:23:33.237 "data_offset": 0, 00:23:33.237 "data_size": 63488 00:23:33.237 }, 00:23:33.237 { 00:23:33.237 "name": "BaseBdev2", 00:23:33.237 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:33.237 "is_configured": true, 00:23:33.237 "data_offset": 2048, 00:23:33.237 "data_size": 63488 00:23:33.237 } 00:23:33.237 ] 00:23:33.237 }' 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.237 17:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:33.802 "name": "raid_bdev1", 00:23:33.802 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:33.802 "strip_size_kb": 0, 00:23:33.802 "state": "online", 00:23:33.802 "raid_level": "raid1", 00:23:33.802 "superblock": true, 00:23:33.802 "num_base_bdevs": 2, 00:23:33.802 "num_base_bdevs_discovered": 1, 00:23:33.802 "num_base_bdevs_operational": 1, 00:23:33.802 "base_bdevs_list": [ 00:23:33.802 { 00:23:33.802 "name": null, 00:23:33.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.802 "is_configured": false, 00:23:33.802 "data_offset": 0, 00:23:33.802 "data_size": 63488 00:23:33.802 }, 00:23:33.802 { 00:23:33.802 "name": "BaseBdev2", 00:23:33.802 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:33.802 "is_configured": true, 00:23:33.802 "data_offset": 2048, 00:23:33.802 "data_size": 63488 00:23:33.802 } 00:23:33.802 ] 00:23:33.802 }' 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:33.802 [2024-11-08 17:11:10.346924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:33.802 17:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:33.802 [2024-11-08 17:11:10.411519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:33.802 [2024-11-08 17:11:10.413682] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:34.059 157.50 IOPS, 472.50 MiB/s [2024-11-08T17:11:10.774Z] [2024-11-08 17:11:10.528711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:34.059 [2024-11-08 17:11:10.529415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:34.059 [2024-11-08 17:11:10.752893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:34.059 [2024-11-08 17:11:10.753391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:34.624 [2024-11-08 17:11:11.087068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:34.624 [2024-11-08 17:11:11.318126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:34.624 [2024-11-08 17:11:11.318635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.882 127.67 IOPS, 383.00 MiB/s [2024-11-08T17:11:11.597Z] 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:34.882 "name": "raid_bdev1", 00:23:34.882 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:34.882 "strip_size_kb": 0, 00:23:34.882 "state": "online", 00:23:34.882 "raid_level": "raid1", 00:23:34.882 "superblock": true, 00:23:34.882 "num_base_bdevs": 2, 00:23:34.882 "num_base_bdevs_discovered": 2, 00:23:34.882 "num_base_bdevs_operational": 2, 00:23:34.882 "process": { 00:23:34.882 "type": "rebuild", 00:23:34.882 "target": "spare", 00:23:34.882 "progress": { 00:23:34.882 "blocks": 10240, 00:23:34.882 "percent": 16 00:23:34.882 } 00:23:34.882 }, 00:23:34.882 "base_bdevs_list": [ 00:23:34.882 { 00:23:34.882 "name": "spare", 00:23:34.882 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:34.882 "is_configured": true, 00:23:34.882 "data_offset": 2048, 00:23:34.882 "data_size": 63488 00:23:34.882 }, 00:23:34.882 { 00:23:34.882 "name": "BaseBdev2", 00:23:34.882 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:34.882 "is_configured": true, 00:23:34.882 "data_offset": 2048, 00:23:34.882 "data_size": 63488 00:23:34.882 } 00:23:34.882 ] 00:23:34.882 }' 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:34.882 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=359 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:34.882 "name": "raid_bdev1", 00:23:34.882 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:34.882 "strip_size_kb": 0, 00:23:34.882 "state": "online", 00:23:34.882 "raid_level": "raid1", 00:23:34.882 "superblock": true, 00:23:34.882 "num_base_bdevs": 2, 00:23:34.882 "num_base_bdevs_discovered": 2, 00:23:34.882 "num_base_bdevs_operational": 2, 00:23:34.882 "process": { 00:23:34.882 "type": "rebuild", 00:23:34.882 "target": "spare", 00:23:34.882 "progress": { 00:23:34.882 "blocks": 10240, 00:23:34.882 "percent": 16 00:23:34.882 } 00:23:34.882 }, 00:23:34.882 "base_bdevs_list": [ 00:23:34.882 { 00:23:34.882 "name": "spare", 00:23:34.882 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:34.882 "is_configured": true, 00:23:34.882 "data_offset": 2048, 00:23:34.882 "data_size": 63488 00:23:34.882 }, 00:23:34.882 { 00:23:34.882 "name": "BaseBdev2", 00:23:34.882 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:34.882 "is_configured": true, 00:23:34.882 "data_offset": 2048, 00:23:34.882 "data_size": 63488 00:23:34.882 } 00:23:34.882 ] 00:23:34.882 }' 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.882 17:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:35.140 [2024-11-08 17:11:11.768013] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:35.140 [2024-11-08 17:11:11.768320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:35.706 [2024-11-08 17:11:12.256989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:35.963 113.00 IOPS, 339.00 MiB/s [2024-11-08T17:11:12.678Z] 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:35.963 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.964 17:11:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:35.964 "name": "raid_bdev1", 00:23:35.964 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:35.964 "strip_size_kb": 0, 00:23:35.964 "state": "online", 00:23:35.964 "raid_level": "raid1", 00:23:35.964 "superblock": true, 00:23:35.964 "num_base_bdevs": 2, 00:23:35.964 "num_base_bdevs_discovered": 2, 00:23:35.964 "num_base_bdevs_operational": 2, 00:23:35.964 "process": { 00:23:35.964 "type": "rebuild", 00:23:35.964 "target": "spare", 00:23:35.964 "progress": { 00:23:35.964 "blocks": 26624, 00:23:35.964 "percent": 41 00:23:35.964 } 00:23:35.964 }, 00:23:35.964 "base_bdevs_list": [ 00:23:35.964 { 00:23:35.964 "name": "spare", 00:23:35.964 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:35.964 "is_configured": true, 00:23:35.964 "data_offset": 2048, 00:23:35.964 "data_size": 63488 00:23:35.964 }, 00:23:35.964 { 00:23:35.964 "name": "BaseBdev2", 00:23:35.964 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:35.964 "is_configured": true, 00:23:35.964 "data_offset": 2048, 00:23:35.964 "data_size": 63488 00:23:35.964 } 00:23:35.964 ] 00:23:35.964 }' 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:35.964 [2024-11-08 17:11:12.647616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:35.964 [2024-11-08 17:11:12.648117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:35.964 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:36.222 17:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.222 17:11:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:36.479 [2024-11-08 17:11:12.991482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:36.994 101.80 IOPS, 305.40 MiB/s [2024-11-08T17:11:13.709Z] 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:36.994 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.994 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:36.994 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:36.994 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:36.994 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:36.994 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.994 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.994 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.994 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:36.994 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.251 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:37.251 "name": "raid_bdev1", 00:23:37.251 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:37.251 "strip_size_kb": 0, 00:23:37.251 "state": "online", 00:23:37.251 "raid_level": "raid1", 00:23:37.251 "superblock": true, 00:23:37.251 "num_base_bdevs": 2, 00:23:37.251 "num_base_bdevs_discovered": 2, 00:23:37.251 "num_base_bdevs_operational": 
2, 00:23:37.251 "process": { 00:23:37.251 "type": "rebuild", 00:23:37.251 "target": "spare", 00:23:37.251 "progress": { 00:23:37.251 "blocks": 45056, 00:23:37.251 "percent": 70 00:23:37.251 } 00:23:37.251 }, 00:23:37.251 "base_bdevs_list": [ 00:23:37.252 { 00:23:37.252 "name": "spare", 00:23:37.252 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:37.252 "is_configured": true, 00:23:37.252 "data_offset": 2048, 00:23:37.252 "data_size": 63488 00:23:37.252 }, 00:23:37.252 { 00:23:37.252 "name": "BaseBdev2", 00:23:37.252 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:37.252 "is_configured": true, 00:23:37.252 "data_offset": 2048, 00:23:37.252 "data_size": 63488 00:23:37.252 } 00:23:37.252 ] 00:23:37.252 }' 00:23:37.252 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:37.252 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:37.252 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:37.252 [2024-11-08 17:11:13.787954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:37.252 [2024-11-08 17:11:13.788329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:37.252 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:37.252 17:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:37.510 [2024-11-08 17:11:14.022484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:37.769 [2024-11-08 17:11:14.241581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:38.335 90.83 IOPS, 272.50 MiB/s 
[2024-11-08T17:11:15.050Z] 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:38.335 "name": "raid_bdev1", 00:23:38.335 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:38.335 "strip_size_kb": 0, 00:23:38.335 "state": "online", 00:23:38.335 "raid_level": "raid1", 00:23:38.335 "superblock": true, 00:23:38.335 "num_base_bdevs": 2, 00:23:38.335 "num_base_bdevs_discovered": 2, 00:23:38.335 "num_base_bdevs_operational": 2, 00:23:38.335 "process": { 00:23:38.335 "type": "rebuild", 00:23:38.335 "target": "spare", 00:23:38.335 "progress": { 00:23:38.335 "blocks": 61440, 00:23:38.335 "percent": 96 00:23:38.335 } 00:23:38.335 }, 00:23:38.335 "base_bdevs_list": [ 00:23:38.335 { 00:23:38.335 
"name": "spare", 00:23:38.335 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:38.335 "is_configured": true, 00:23:38.335 "data_offset": 2048, 00:23:38.335 "data_size": 63488 00:23:38.335 }, 00:23:38.335 { 00:23:38.335 "name": "BaseBdev2", 00:23:38.335 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:38.335 "is_configured": true, 00:23:38.335 "data_offset": 2048, 00:23:38.335 "data_size": 63488 00:23:38.335 } 00:23:38.335 ] 00:23:38.335 }' 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.335 17:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:38.335 [2024-11-08 17:11:14.902811] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:38.335 [2024-11-08 17:11:15.009267] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:38.335 [2024-11-08 17:11:15.011458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.497 82.71 IOPS, 248.14 MiB/s [2024-11-08T17:11:16.212Z] 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:39.497 "name": "raid_bdev1", 00:23:39.497 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:39.497 "strip_size_kb": 0, 00:23:39.497 "state": "online", 00:23:39.497 "raid_level": "raid1", 00:23:39.497 "superblock": true, 00:23:39.497 "num_base_bdevs": 2, 00:23:39.497 "num_base_bdevs_discovered": 2, 00:23:39.497 "num_base_bdevs_operational": 2, 00:23:39.497 "base_bdevs_list": [ 00:23:39.497 { 00:23:39.497 "name": "spare", 00:23:39.497 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:39.497 "is_configured": true, 00:23:39.497 "data_offset": 2048, 00:23:39.497 "data_size": 63488 00:23:39.497 }, 00:23:39.497 { 00:23:39.497 "name": "BaseBdev2", 00:23:39.497 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:39.497 "is_configured": true, 00:23:39.497 "data_offset": 2048, 00:23:39.497 "data_size": 63488 00:23:39.497 } 00:23:39.497 ] 00:23:39.497 }' 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:39.497 17:11:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:39.497 "name": "raid_bdev1", 00:23:39.497 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:39.497 "strip_size_kb": 0, 00:23:39.497 "state": "online", 00:23:39.497 "raid_level": "raid1", 00:23:39.497 "superblock": true, 00:23:39.497 "num_base_bdevs": 2, 00:23:39.497 "num_base_bdevs_discovered": 2, 00:23:39.497 "num_base_bdevs_operational": 2, 00:23:39.497 "base_bdevs_list": [ 00:23:39.497 { 00:23:39.497 "name": "spare", 00:23:39.497 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 
00:23:39.497 "is_configured": true, 00:23:39.497 "data_offset": 2048, 00:23:39.497 "data_size": 63488 00:23:39.497 }, 00:23:39.497 { 00:23:39.497 "name": "BaseBdev2", 00:23:39.497 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:39.497 "is_configured": true, 00:23:39.497 "data_offset": 2048, 00:23:39.497 "data_size": 63488 00:23:39.497 } 00:23:39.497 ] 00:23:39.497 }' 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.497 17:11:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.497 "name": "raid_bdev1", 00:23:39.497 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:39.497 "strip_size_kb": 0, 00:23:39.497 "state": "online", 00:23:39.497 "raid_level": "raid1", 00:23:39.497 "superblock": true, 00:23:39.497 "num_base_bdevs": 2, 00:23:39.497 "num_base_bdevs_discovered": 2, 00:23:39.497 "num_base_bdevs_operational": 2, 00:23:39.497 "base_bdevs_list": [ 00:23:39.497 { 00:23:39.497 "name": "spare", 00:23:39.497 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:39.497 "is_configured": true, 00:23:39.497 "data_offset": 2048, 00:23:39.497 "data_size": 63488 00:23:39.497 }, 00:23:39.497 { 00:23:39.497 "name": "BaseBdev2", 00:23:39.497 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:39.497 "is_configured": true, 00:23:39.497 "data_offset": 2048, 00:23:39.497 "data_size": 63488 00:23:39.497 } 00:23:39.497 ] 00:23:39.497 }' 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.497 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.755 76.38 IOPS, 229.12 MiB/s [2024-11-08T17:11:16.470Z] 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:39.755 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:39.755 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.755 [2024-11-08 17:11:16.427202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:39.755 [2024-11-08 17:11:16.427231] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:40.013 00:23:40.013 Latency(us) 00:23:40.013 [2024-11-08T17:11:16.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.013 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:40.013 raid_bdev1 : 8.12 75.64 226.92 0.00 0.00 18600.10 296.17 116956.55 00:23:40.013 [2024-11-08T17:11:16.728Z] =================================================================================================================== 00:23:40.013 [2024-11-08T17:11:16.728Z] Total : 75.64 226.92 0.00 0.00 18600.10 296.17 116956.55 00:23:40.013 { 00:23:40.013 "results": [ 00:23:40.013 { 00:23:40.013 "job": "raid_bdev1", 00:23:40.013 "core_mask": "0x1", 00:23:40.013 "workload": "randrw", 00:23:40.013 "percentage": 50, 00:23:40.013 "status": "finished", 00:23:40.013 "queue_depth": 2, 00:23:40.013 "io_size": 3145728, 00:23:40.013 "runtime": 8.11751, 00:23:40.013 "iops": 75.63895825197628, 00:23:40.013 "mibps": 226.91687475592883, 00:23:40.013 "io_failed": 0, 00:23:40.013 "io_timeout": 0, 00:23:40.013 "avg_latency_us": 18600.097900275618, 00:23:40.013 "min_latency_us": 296.1723076923077, 00:23:40.013 "max_latency_us": 116956.55384615385 00:23:40.013 } 00:23:40.013 ], 00:23:40.013 "core_count": 1 00:23:40.013 } 00:23:40.013 [2024-11-08 17:11:16.531290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.013 [2024-11-08 17:11:16.531350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:40.013 [2024-11-08 17:11:16.531446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:23:40.013 [2024-11-08 17:11:16.531460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:40.013 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.013 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:23:40.013 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.013 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.013 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:40.013 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.013 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:40.013 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:40.014 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:23:40.271 /dev/nbd0 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:40.271 1+0 records in 00:23:40.271 1+0 records out 00:23:40.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586949 s, 7.0 MB/s 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:23:40.271 17:11:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:23:40.271 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:40.272 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:40.272 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:40.272 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:40.272 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:40.272 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:40.272 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:40.272 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:40.272 17:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:40.529 /dev/nbd1 00:23:40.529 17:11:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:40.529 1+0 records in 00:23:40.529 1+0 records out 00:23:40.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598932 s, 6.8 MB/s 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 
00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.529 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.787 
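The trace above repeatedly exercises a `waitfornbd` helper before touching each nbd device. The following is a sketch reconstructed from the xtrace lines only (the helper name and the 20-iteration bound come from the log; the temp-file path and sleep interval are assumptions, not the actual SPDK `autotest_common.sh` source): poll `/proc/partitions` until the device appears, then confirm it serves reads with a single 4 KiB direct-I/O read.

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd pattern seen in the xtrace above.
# Assumptions: /tmp/nbdtest as scratch file, 0.1 s poll interval.
waitfornbd() {
    local nbd_name=$1 i
    # Poll until the kernel lists the device (the log shows i running 1..20).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    # Verify the device actually serves reads, bypassing the page cache.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct \
        2>/dev/null || return 1
    # A successful direct read of one block should yield exactly 4096 bytes.
    [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]
}
```

The two-phase check matters: a device node can appear in `/proc/partitions` before the nbd connection is fully serviceable, so the direct-I/O read guards against racing the kernel's device setup.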
17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.787 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.044 [2024-11-08 17:11:17.652605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:41.044 [2024-11-08 17:11:17.652668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.044 [2024-11-08 17:11:17.652692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:41.044 [2024-11-08 17:11:17.652705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.044 [2024-11-08 17:11:17.655165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.044 [2024-11-08 17:11:17.655204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:41.044 [2024-11-08 17:11:17.655311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:41.044 [2024-11-08 17:11:17.655362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:41.044 [2024-11-08 17:11:17.655496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:41.044 spare 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:41.044 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.044 [2024-11-08 17:11:17.755617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:41.044 [2024-11-08 17:11:17.755676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:41.044 [2024-11-08 17:11:17.756085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:23:41.044 [2024-11-08 17:11:17.756285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:41.044 [2024-11-08 17:11:17.756301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:41.044 [2024-11-08 17:11:17.756493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.302 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.302 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:41.302 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:41.302 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:41.302 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.303 "name": "raid_bdev1", 00:23:41.303 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:41.303 "strip_size_kb": 0, 00:23:41.303 "state": "online", 00:23:41.303 "raid_level": "raid1", 00:23:41.303 "superblock": true, 00:23:41.303 "num_base_bdevs": 2, 00:23:41.303 "num_base_bdevs_discovered": 2, 00:23:41.303 "num_base_bdevs_operational": 2, 00:23:41.303 "base_bdevs_list": [ 00:23:41.303 { 00:23:41.303 "name": "spare", 00:23:41.303 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:41.303 "is_configured": true, 00:23:41.303 "data_offset": 2048, 00:23:41.303 "data_size": 63488 00:23:41.303 }, 00:23:41.303 { 00:23:41.303 "name": "BaseBdev2", 00:23:41.303 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:41.303 "is_configured": true, 00:23:41.303 "data_offset": 2048, 00:23:41.303 "data_size": 63488 00:23:41.303 } 00:23:41.303 ] 00:23:41.303 }' 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.303 17:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.561 "name": "raid_bdev1", 00:23:41.561 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:41.561 "strip_size_kb": 0, 00:23:41.561 "state": "online", 00:23:41.561 "raid_level": "raid1", 00:23:41.561 "superblock": true, 00:23:41.561 "num_base_bdevs": 2, 00:23:41.561 "num_base_bdevs_discovered": 2, 00:23:41.561 "num_base_bdevs_operational": 2, 00:23:41.561 "base_bdevs_list": [ 00:23:41.561 { 00:23:41.561 "name": "spare", 00:23:41.561 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:41.561 "is_configured": true, 00:23:41.561 "data_offset": 2048, 00:23:41.561 "data_size": 63488 00:23:41.561 }, 00:23:41.561 { 00:23:41.561 "name": "BaseBdev2", 00:23:41.561 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:41.561 "is_configured": true, 00:23:41.561 "data_offset": 2048, 00:23:41.561 "data_size": 63488 00:23:41.561 } 00:23:41.561 ] 00:23:41.561 }' 00:23:41.561 
17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.561 [2024-11-08 17:11:18.200877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.561 "name": "raid_bdev1", 00:23:41.561 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:41.561 "strip_size_kb": 0, 00:23:41.561 "state": "online", 00:23:41.561 "raid_level": "raid1", 00:23:41.561 "superblock": true, 00:23:41.561 "num_base_bdevs": 2, 00:23:41.561 "num_base_bdevs_discovered": 1, 00:23:41.561 "num_base_bdevs_operational": 1, 00:23:41.561 "base_bdevs_list": [ 00:23:41.561 { 00:23:41.561 "name": null, 00:23:41.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.561 "is_configured": false, 
00:23:41.561 "data_offset": 0, 00:23:41.561 "data_size": 63488 00:23:41.561 }, 00:23:41.561 { 00:23:41.561 "name": "BaseBdev2", 00:23:41.561 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:41.561 "is_configured": true, 00:23:41.561 "data_offset": 2048, 00:23:41.561 "data_size": 63488 00:23:41.561 } 00:23:41.561 ] 00:23:41.561 }' 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.561 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:42.127 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:42.127 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.127 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:42.127 [2024-11-08 17:11:18.545051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:42.127 [2024-11-08 17:11:18.545265] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:42.127 [2024-11-08 17:11:18.545280] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:42.127 [2024-11-08 17:11:18.545320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:42.127 [2024-11-08 17:11:18.557118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:23:42.127 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.127 17:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:42.127 [2024-11-08 17:11:18.559140] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.068 "name": "raid_bdev1", 00:23:43.068 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:43.068 "strip_size_kb": 0, 00:23:43.068 "state": "online", 
00:23:43.068 "raid_level": "raid1", 00:23:43.068 "superblock": true, 00:23:43.068 "num_base_bdevs": 2, 00:23:43.068 "num_base_bdevs_discovered": 2, 00:23:43.068 "num_base_bdevs_operational": 2, 00:23:43.068 "process": { 00:23:43.068 "type": "rebuild", 00:23:43.068 "target": "spare", 00:23:43.068 "progress": { 00:23:43.068 "blocks": 20480, 00:23:43.068 "percent": 32 00:23:43.068 } 00:23:43.068 }, 00:23:43.068 "base_bdevs_list": [ 00:23:43.068 { 00:23:43.068 "name": "spare", 00:23:43.068 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:43.068 "is_configured": true, 00:23:43.068 "data_offset": 2048, 00:23:43.068 "data_size": 63488 00:23:43.068 }, 00:23:43.068 { 00:23:43.068 "name": "BaseBdev2", 00:23:43.068 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:43.068 "is_configured": true, 00:23:43.068 "data_offset": 2048, 00:23:43.068 "data_size": 63488 00:23:43.068 } 00:23:43.068 ] 00:23:43.068 }' 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.068 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:43.068 [2024-11-08 17:11:19.669653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:43.068 [2024-11-08 17:11:19.766705] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:43.068 [2024-11-08 
17:11:19.766795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.068 [2024-11-08 17:11:19.766815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:43.068 [2024-11-08 17:11:19.766823] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.329 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.329 "name": "raid_bdev1", 00:23:43.329 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:43.330 "strip_size_kb": 0, 00:23:43.330 "state": "online", 00:23:43.330 "raid_level": "raid1", 00:23:43.330 "superblock": true, 00:23:43.330 "num_base_bdevs": 2, 00:23:43.330 "num_base_bdevs_discovered": 1, 00:23:43.330 "num_base_bdevs_operational": 1, 00:23:43.330 "base_bdevs_list": [ 00:23:43.330 { 00:23:43.330 "name": null, 00:23:43.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.330 "is_configured": false, 00:23:43.330 "data_offset": 0, 00:23:43.330 "data_size": 63488 00:23:43.330 }, 00:23:43.330 { 00:23:43.330 "name": "BaseBdev2", 00:23:43.330 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:43.330 "is_configured": true, 00:23:43.330 "data_offset": 2048, 00:23:43.330 "data_size": 63488 00:23:43.330 } 00:23:43.330 ] 00:23:43.330 }' 00:23:43.330 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.330 17:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:43.588 17:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:43.588 17:11:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.588 17:11:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:43.588 [2024-11-08 17:11:20.137791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:43.588 [2024-11-08 17:11:20.137868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.588 [2024-11-08 17:11:20.137896] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:23:43.588 [2024-11-08 17:11:20.137906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.588 [2024-11-08 17:11:20.138432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.588 [2024-11-08 17:11:20.138456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:43.589 [2024-11-08 17:11:20.138562] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:43.589 [2024-11-08 17:11:20.138576] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:43.589 [2024-11-08 17:11:20.138591] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:43.589 [2024-11-08 17:11:20.138612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:43.589 [2024-11-08 17:11:20.150490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:23:43.589 spare 00:23:43.589 17:11:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.589 17:11:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:43.589 [2024-11-08 17:11:20.152547] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:44.521 "name": "raid_bdev1", 00:23:44.521 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:44.521 "strip_size_kb": 0, 00:23:44.521 "state": "online", 00:23:44.521 "raid_level": "raid1", 00:23:44.521 "superblock": true, 00:23:44.521 "num_base_bdevs": 2, 00:23:44.521 "num_base_bdevs_discovered": 2, 00:23:44.521 "num_base_bdevs_operational": 2, 00:23:44.521 "process": { 00:23:44.521 "type": "rebuild", 00:23:44.521 "target": "spare", 00:23:44.521 "progress": { 00:23:44.521 "blocks": 20480, 00:23:44.521 "percent": 32 00:23:44.521 } 00:23:44.521 }, 00:23:44.521 "base_bdevs_list": [ 00:23:44.521 { 00:23:44.521 "name": "spare", 00:23:44.521 "uuid": "4c3822eb-562c-5dc6-90da-25b1225bdd52", 00:23:44.521 "is_configured": true, 00:23:44.521 "data_offset": 2048, 00:23:44.521 "data_size": 63488 00:23:44.521 }, 00:23:44.521 { 00:23:44.521 "name": "BaseBdev2", 00:23:44.521 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:44.521 "is_configured": true, 00:23:44.521 "data_offset": 2048, 00:23:44.521 "data_size": 63488 00:23:44.521 } 00:23:44.521 ] 00:23:44.521 }' 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:23:44.521 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.781 [2024-11-08 17:11:21.259009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:44.781 [2024-11-08 17:11:21.259392] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:44.781 [2024-11-08 17:11:21.259449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.781 [2024-11-08 17:11:21.259465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:44.781 [2024-11-08 17:11:21.259476] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:44.781 "name": "raid_bdev1", 00:23:44.781 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:44.781 "strip_size_kb": 0, 00:23:44.781 "state": "online", 00:23:44.781 "raid_level": "raid1", 00:23:44.781 "superblock": true, 00:23:44.781 "num_base_bdevs": 2, 00:23:44.781 "num_base_bdevs_discovered": 1, 00:23:44.781 "num_base_bdevs_operational": 1, 00:23:44.781 "base_bdevs_list": [ 00:23:44.781 { 00:23:44.781 "name": null, 00:23:44.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.781 "is_configured": false, 00:23:44.781 "data_offset": 0, 00:23:44.781 "data_size": 63488 00:23:44.781 }, 00:23:44.781 { 00:23:44.781 "name": "BaseBdev2", 00:23:44.781 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:44.781 "is_configured": true, 00:23:44.781 "data_offset": 2048, 00:23:44.781 "data_size": 63488 00:23:44.781 } 00:23:44.781 ] 00:23:44.781 }' 
00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:44.781 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:45.039 "name": "raid_bdev1", 00:23:45.039 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:45.039 "strip_size_kb": 0, 00:23:45.039 "state": "online", 00:23:45.039 "raid_level": "raid1", 00:23:45.039 "superblock": true, 00:23:45.039 "num_base_bdevs": 2, 00:23:45.039 "num_base_bdevs_discovered": 1, 00:23:45.039 "num_base_bdevs_operational": 1, 00:23:45.039 "base_bdevs_list": [ 00:23:45.039 { 00:23:45.039 "name": null, 00:23:45.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.039 "is_configured": false, 00:23:45.039 "data_offset": 0, 
00:23:45.039 "data_size": 63488 00:23:45.039 }, 00:23:45.039 { 00:23:45.039 "name": "BaseBdev2", 00:23:45.039 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:45.039 "is_configured": true, 00:23:45.039 "data_offset": 2048, 00:23:45.039 "data_size": 63488 00:23:45.039 } 00:23:45.039 ] 00:23:45.039 }' 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.039 [2024-11-08 17:11:21.742721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:45.039 [2024-11-08 17:11:21.742793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.039 [2024-11-08 17:11:21.742816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:45.039 [2024-11-08 17:11:21.742828] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.039 [2024-11-08 17:11:21.743305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.039 [2024-11-08 17:11:21.743328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:45.039 [2024-11-08 17:11:21.743413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:45.039 [2024-11-08 17:11:21.743435] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:45.039 [2024-11-08 17:11:21.743443] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:45.039 [2024-11-08 17:11:21.743459] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:45.039 BaseBdev1 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.039 17:11:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:46.432 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:46.432 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:46.432 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.433 "name": "raid_bdev1", 00:23:46.433 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:46.433 "strip_size_kb": 0, 00:23:46.433 "state": "online", 00:23:46.433 "raid_level": "raid1", 00:23:46.433 "superblock": true, 00:23:46.433 "num_base_bdevs": 2, 00:23:46.433 "num_base_bdevs_discovered": 1, 00:23:46.433 "num_base_bdevs_operational": 1, 00:23:46.433 "base_bdevs_list": [ 00:23:46.433 { 00:23:46.433 "name": null, 00:23:46.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.433 "is_configured": false, 00:23:46.433 "data_offset": 0, 00:23:46.433 "data_size": 63488 00:23:46.433 }, 00:23:46.433 { 00:23:46.433 "name": "BaseBdev2", 00:23:46.433 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:46.433 "is_configured": true, 00:23:46.433 "data_offset": 2048, 00:23:46.433 "data_size": 63488 00:23:46.433 } 00:23:46.433 ] 00:23:46.433 }' 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.433 17:11:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:46.433 "name": "raid_bdev1", 00:23:46.433 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:46.433 "strip_size_kb": 0, 00:23:46.433 "state": "online", 00:23:46.433 "raid_level": "raid1", 00:23:46.433 "superblock": true, 00:23:46.433 "num_base_bdevs": 2, 00:23:46.433 "num_base_bdevs_discovered": 1, 00:23:46.433 "num_base_bdevs_operational": 1, 00:23:46.433 "base_bdevs_list": [ 00:23:46.433 { 00:23:46.433 "name": null, 00:23:46.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.433 "is_configured": false, 00:23:46.433 "data_offset": 0, 00:23:46.433 "data_size": 63488 00:23:46.433 }, 00:23:46.433 { 00:23:46.433 "name": "BaseBdev2", 00:23:46.433 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:46.433 "is_configured": true, 
00:23:46.433 "data_offset": 2048, 00:23:46.433 "data_size": 63488 00:23:46.433 } 00:23:46.433 ] 00:23:46.433 }' 00:23:46.433 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.719 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.719 [2024-11-08 17:11:23.187278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:46.719 [2024-11-08 17:11:23.187456] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:46.719 [2024-11-08 17:11:23.187469] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:46.719 request: 00:23:46.719 { 00:23:46.719 "base_bdev": "BaseBdev1", 00:23:46.719 "raid_bdev": "raid_bdev1", 00:23:46.719 "method": "bdev_raid_add_base_bdev", 00:23:46.719 "req_id": 1 00:23:46.719 } 00:23:46.719 Got JSON-RPC error response 00:23:46.719 response: 00:23:46.720 { 00:23:46.720 "code": -22, 00:23:46.720 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:46.720 } 00:23:46.720 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:46.720 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:23:46.720 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:46.720 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:46.720 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:46.720 17:11:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.653 "name": "raid_bdev1", 00:23:47.653 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:47.653 "strip_size_kb": 0, 00:23:47.653 "state": "online", 00:23:47.653 "raid_level": "raid1", 00:23:47.653 "superblock": true, 00:23:47.653 "num_base_bdevs": 2, 00:23:47.653 "num_base_bdevs_discovered": 1, 00:23:47.653 "num_base_bdevs_operational": 1, 00:23:47.653 "base_bdevs_list": [ 00:23:47.653 { 00:23:47.653 "name": null, 00:23:47.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.653 "is_configured": false, 00:23:47.653 "data_offset": 0, 00:23:47.653 "data_size": 63488 00:23:47.653 }, 00:23:47.653 { 00:23:47.653 "name": "BaseBdev2", 00:23:47.653 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:47.653 "is_configured": true, 00:23:47.653 "data_offset": 2048, 00:23:47.653 "data_size": 63488 00:23:47.653 } 00:23:47.653 ] 00:23:47.653 }' 
00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.653 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:47.912 "name": "raid_bdev1", 00:23:47.912 "uuid": "b02d0366-1969-40af-a810-f042f987b4d7", 00:23:47.912 "strip_size_kb": 0, 00:23:47.912 "state": "online", 00:23:47.912 "raid_level": "raid1", 00:23:47.912 "superblock": true, 00:23:47.912 "num_base_bdevs": 2, 00:23:47.912 "num_base_bdevs_discovered": 1, 00:23:47.912 "num_base_bdevs_operational": 1, 00:23:47.912 "base_bdevs_list": [ 00:23:47.912 { 00:23:47.912 "name": null, 00:23:47.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.912 "is_configured": false, 00:23:47.912 "data_offset": 0, 
00:23:47.912 "data_size": 63488 00:23:47.912 }, 00:23:47.912 { 00:23:47.912 "name": "BaseBdev2", 00:23:47.912 "uuid": "0a1c0f93-6b16-50db-814d-cf694db9c611", 00:23:47.912 "is_configured": true, 00:23:47.912 "data_offset": 2048, 00:23:47.912 "data_size": 63488 00:23:47.912 } 00:23:47.912 ] 00:23:47.912 }' 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:47.912 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 75319 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 75319 ']' 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 75319 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75319 00:23:48.200 killing process with pid 75319 00:23:48.200 Received shutdown signal, test time was about 16.266070 seconds 00:23:48.200 00:23:48.200 Latency(us) 00:23:48.200 [2024-11-08T17:11:24.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.200 [2024-11-08T17:11:24.915Z] =================================================================================================================== 00:23:48.200 [2024-11-08T17:11:24.915Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75319' 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 75319 00:23:48.200 [2024-11-08 17:11:24.663770] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:48.200 17:11:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 75319 00:23:48.200 [2024-11-08 17:11:24.663920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.200 [2024-11-08 17:11:24.663986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:48.200 [2024-11-08 17:11:24.663997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:48.200 [2024-11-08 17:11:24.813481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:49.149 ************************************ 00:23:49.149 END TEST raid_rebuild_test_sb_io 00:23:49.149 ************************************ 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:23:49.149 00:23:49.149 real 0m18.753s 00:23:49.149 user 0m23.539s 00:23:49.149 sys 0m1.582s 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:49.149 17:11:25 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:23:49.149 17:11:25 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:23:49.149 17:11:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:23:49.149 17:11:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:49.149 17:11:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:49.149 ************************************ 00:23:49.149 START TEST raid_rebuild_test 00:23:49.149 ************************************ 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:49.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75999 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75999 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75999 ']' 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:49.149 
17:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.149 17:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:49.149 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:49.149 Zero copy mechanism will not be used. 00:23:49.149 [2024-11-08 17:11:25.739075] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:23:49.150 [2024-11-08 17:11:25.739214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75999 ] 00:23:49.408 [2024-11-08 17:11:25.897485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.408 [2024-11-08 17:11:26.016309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.670 [2024-11-08 17:11:26.163457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.670 [2024-11-08 17:11:26.163505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.929 BaseBdev1_malloc 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.929 [2024-11-08 17:11:26.632711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:49.929 [2024-11-08 17:11:26.632786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.929 [2024-11-08 17:11:26.632807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:49.929 [2024-11-08 17:11:26.632819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.929 [2024-11-08 17:11:26.635069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.929 [2024-11-08 17:11:26.635105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:49.929 BaseBdev1 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.929 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:23:50.188 BaseBdev2_malloc 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.188 [2024-11-08 17:11:26.674561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:50.188 [2024-11-08 17:11:26.674614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.188 [2024-11-08 17:11:26.674630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:50.188 [2024-11-08 17:11:26.674643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.188 [2024-11-08 17:11:26.676798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.188 [2024-11-08 17:11:26.676831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:50.188 BaseBdev2 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.188 BaseBdev3_malloc 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.188 [2024-11-08 17:11:26.734138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:50.188 [2024-11-08 17:11:26.734308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.188 [2024-11-08 17:11:26.734335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:50.188 [2024-11-08 17:11:26.734347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.188 [2024-11-08 17:11:26.736566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.188 [2024-11-08 17:11:26.736605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:50.188 BaseBdev3 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.188 BaseBdev4_malloc 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:50.188 [2024-11-08 17:11:26.772136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:50.188 [2024-11-08 17:11:26.772187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.188 [2024-11-08 17:11:26.772207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:50.188 [2024-11-08 17:11:26.772218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.188 [2024-11-08 17:11:26.774452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.188 [2024-11-08 17:11:26.774490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:50.188 BaseBdev4 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.188 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.189 spare_malloc 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.189 spare_delay 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:50.189 
17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.189 [2024-11-08 17:11:26.818148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:50.189 [2024-11-08 17:11:26.818202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.189 [2024-11-08 17:11:26.818221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:50.189 [2024-11-08 17:11:26.818231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.189 [2024-11-08 17:11:26.820422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.189 [2024-11-08 17:11:26.820552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:50.189 spare 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.189 [2024-11-08 17:11:26.826210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:50.189 [2024-11-08 17:11:26.828175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:50.189 [2024-11-08 17:11:26.828243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:50.189 [2024-11-08 17:11:26.828297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:50.189 [2024-11-08 17:11:26.828385] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:23:50.189 [2024-11-08 17:11:26.828398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:50.189 [2024-11-08 17:11:26.828671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:50.189 [2024-11-08 17:11:26.828844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:50.189 [2024-11-08 17:11:26.828856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:50.189 [2024-11-08 17:11:26.828998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.189 17:11:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:50.189 "name": "raid_bdev1", 00:23:50.189 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:23:50.189 "strip_size_kb": 0, 00:23:50.189 "state": "online", 00:23:50.189 "raid_level": "raid1", 00:23:50.189 "superblock": false, 00:23:50.189 "num_base_bdevs": 4, 00:23:50.189 "num_base_bdevs_discovered": 4, 00:23:50.189 "num_base_bdevs_operational": 4, 00:23:50.189 "base_bdevs_list": [ 00:23:50.189 { 00:23:50.189 "name": "BaseBdev1", 00:23:50.189 "uuid": "6886dd4f-2dca-59da-8f34-c5ce43ef07da", 00:23:50.189 "is_configured": true, 00:23:50.189 "data_offset": 0, 00:23:50.189 "data_size": 65536 00:23:50.189 }, 00:23:50.189 { 00:23:50.189 "name": "BaseBdev2", 00:23:50.189 "uuid": "ea1192dc-d891-52f0-8498-0511e8a28df1", 00:23:50.189 "is_configured": true, 00:23:50.189 "data_offset": 0, 00:23:50.189 "data_size": 65536 00:23:50.189 }, 00:23:50.189 { 00:23:50.189 "name": "BaseBdev3", 00:23:50.189 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:23:50.189 "is_configured": true, 00:23:50.189 "data_offset": 0, 00:23:50.189 "data_size": 65536 00:23:50.189 }, 00:23:50.189 { 00:23:50.189 "name": "BaseBdev4", 00:23:50.189 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:23:50.189 "is_configured": true, 00:23:50.189 "data_offset": 0, 00:23:50.189 "data_size": 65536 00:23:50.189 } 00:23:50.189 ] 00:23:50.189 }' 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:50.189 17:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:50.755 [2024-11-08 17:11:27.186680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:50.755 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:50.755 [2024-11-08 17:11:27.434405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:50.755 /dev/nbd0 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:51.013 17:11:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:51.013 1+0 records in 00:23:51.013 1+0 records out 00:23:51.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203896 s, 20.1 MB/s 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:51.013 17:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:03.220 65536+0 records in 00:24:03.220 65536+0 records out 00:24:03.220 33554432 bytes (34 MB, 32 MiB) copied, 11.5714 s, 2.9 MB/s 00:24:03.220 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:03.220 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:03.220 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:03.220 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:03.220 
17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:03.220 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:03.220 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:03.220 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:03.220 [2024-11-08 17:11:39.271284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.221 [2024-11-08 17:11:39.279386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:03.221 "name": "raid_bdev1", 00:24:03.221 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:03.221 "strip_size_kb": 0, 00:24:03.221 "state": "online", 00:24:03.221 "raid_level": "raid1", 00:24:03.221 "superblock": false, 00:24:03.221 "num_base_bdevs": 4, 00:24:03.221 "num_base_bdevs_discovered": 3, 00:24:03.221 "num_base_bdevs_operational": 3, 00:24:03.221 "base_bdevs_list": [ 00:24:03.221 { 00:24:03.221 "name": null, 00:24:03.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.221 
"is_configured": false, 00:24:03.221 "data_offset": 0, 00:24:03.221 "data_size": 65536 00:24:03.221 }, 00:24:03.221 { 00:24:03.221 "name": "BaseBdev2", 00:24:03.221 "uuid": "ea1192dc-d891-52f0-8498-0511e8a28df1", 00:24:03.221 "is_configured": true, 00:24:03.221 "data_offset": 0, 00:24:03.221 "data_size": 65536 00:24:03.221 }, 00:24:03.221 { 00:24:03.221 "name": "BaseBdev3", 00:24:03.221 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:03.221 "is_configured": true, 00:24:03.221 "data_offset": 0, 00:24:03.221 "data_size": 65536 00:24:03.221 }, 00:24:03.221 { 00:24:03.221 "name": "BaseBdev4", 00:24:03.221 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:03.221 "is_configured": true, 00:24:03.221 "data_offset": 0, 00:24:03.221 "data_size": 65536 00:24:03.221 } 00:24:03.221 ] 00:24:03.221 }' 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.221 [2024-11-08 17:11:39.619464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:03.221 [2024-11-08 17:11:39.630737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.221 17:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:03.221 [2024-11-08 17:11:39.632942] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:04.171 "name": "raid_bdev1", 00:24:04.171 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:04.171 "strip_size_kb": 0, 00:24:04.171 "state": "online", 00:24:04.171 "raid_level": "raid1", 00:24:04.171 "superblock": false, 00:24:04.171 "num_base_bdevs": 4, 00:24:04.171 "num_base_bdevs_discovered": 4, 00:24:04.171 "num_base_bdevs_operational": 4, 00:24:04.171 "process": { 00:24:04.171 "type": "rebuild", 00:24:04.171 "target": "spare", 00:24:04.171 "progress": { 00:24:04.171 "blocks": 20480, 00:24:04.171 "percent": 31 00:24:04.171 } 00:24:04.171 }, 00:24:04.171 "base_bdevs_list": [ 00:24:04.171 { 00:24:04.171 "name": "spare", 00:24:04.171 "uuid": "c8b8f56f-fdbf-5132-bfcd-31f2ed575a8e", 00:24:04.171 "is_configured": true, 00:24:04.171 "data_offset": 0, 00:24:04.171 "data_size": 65536 00:24:04.171 }, 00:24:04.171 { 00:24:04.171 "name": "BaseBdev2", 00:24:04.171 "uuid": 
"ea1192dc-d891-52f0-8498-0511e8a28df1", 00:24:04.171 "is_configured": true, 00:24:04.171 "data_offset": 0, 00:24:04.171 "data_size": 65536 00:24:04.171 }, 00:24:04.171 { 00:24:04.171 "name": "BaseBdev3", 00:24:04.171 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:04.171 "is_configured": true, 00:24:04.171 "data_offset": 0, 00:24:04.171 "data_size": 65536 00:24:04.171 }, 00:24:04.171 { 00:24:04.171 "name": "BaseBdev4", 00:24:04.171 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:04.171 "is_configured": true, 00:24:04.171 "data_offset": 0, 00:24:04.171 "data_size": 65536 00:24:04.171 } 00:24:04.171 ] 00:24:04.171 }' 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.171 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.172 [2024-11-08 17:11:40.738530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:04.172 [2024-11-08 17:11:40.740243] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:04.172 [2024-11-08 17:11:40.740314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.172 [2024-11-08 17:11:40.740333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:04.172 [2024-11-08 17:11:40.740344] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:04.172 "name": "raid_bdev1", 00:24:04.172 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:04.172 "strip_size_kb": 0, 00:24:04.172 "state": "online", 
00:24:04.172 "raid_level": "raid1", 00:24:04.172 "superblock": false, 00:24:04.172 "num_base_bdevs": 4, 00:24:04.172 "num_base_bdevs_discovered": 3, 00:24:04.172 "num_base_bdevs_operational": 3, 00:24:04.172 "base_bdevs_list": [ 00:24:04.172 { 00:24:04.172 "name": null, 00:24:04.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.172 "is_configured": false, 00:24:04.172 "data_offset": 0, 00:24:04.172 "data_size": 65536 00:24:04.172 }, 00:24:04.172 { 00:24:04.172 "name": "BaseBdev2", 00:24:04.172 "uuid": "ea1192dc-d891-52f0-8498-0511e8a28df1", 00:24:04.172 "is_configured": true, 00:24:04.172 "data_offset": 0, 00:24:04.172 "data_size": 65536 00:24:04.172 }, 00:24:04.172 { 00:24:04.172 "name": "BaseBdev3", 00:24:04.172 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:04.172 "is_configured": true, 00:24:04.172 "data_offset": 0, 00:24:04.172 "data_size": 65536 00:24:04.172 }, 00:24:04.172 { 00:24:04.172 "name": "BaseBdev4", 00:24:04.172 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:04.172 "is_configured": true, 00:24:04.172 "data_offset": 0, 00:24:04.172 "data_size": 65536 00:24:04.172 } 00:24:04.172 ] 00:24:04.172 }' 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:04.172 17:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.431 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:04.431 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:04.431 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:04.431 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:04.431 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:04.431 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:24:04.431 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.431 17:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.431 17:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.431 17:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.690 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:04.690 "name": "raid_bdev1", 00:24:04.690 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:04.690 "strip_size_kb": 0, 00:24:04.690 "state": "online", 00:24:04.690 "raid_level": "raid1", 00:24:04.690 "superblock": false, 00:24:04.690 "num_base_bdevs": 4, 00:24:04.690 "num_base_bdevs_discovered": 3, 00:24:04.690 "num_base_bdevs_operational": 3, 00:24:04.690 "base_bdevs_list": [ 00:24:04.690 { 00:24:04.690 "name": null, 00:24:04.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.690 "is_configured": false, 00:24:04.690 "data_offset": 0, 00:24:04.690 "data_size": 65536 00:24:04.690 }, 00:24:04.690 { 00:24:04.690 "name": "BaseBdev2", 00:24:04.690 "uuid": "ea1192dc-d891-52f0-8498-0511e8a28df1", 00:24:04.690 "is_configured": true, 00:24:04.690 "data_offset": 0, 00:24:04.690 "data_size": 65536 00:24:04.690 }, 00:24:04.690 { 00:24:04.690 "name": "BaseBdev3", 00:24:04.690 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:04.690 "is_configured": true, 00:24:04.690 "data_offset": 0, 00:24:04.690 "data_size": 65536 00:24:04.690 }, 00:24:04.690 { 00:24:04.690 "name": "BaseBdev4", 00:24:04.690 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:04.690 "is_configured": true, 00:24:04.690 "data_offset": 0, 00:24:04.690 "data_size": 65536 00:24:04.690 } 00:24:04.690 ] 00:24:04.690 }' 00:24:04.690 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:04.690 17:11:41 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:04.690 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:04.690 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:04.690 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:04.690 17:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.690 17:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.690 [2024-11-08 17:11:41.235442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.690 [2024-11-08 17:11:41.245929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:24:04.690 17:11:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.690 17:11:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:04.690 [2024-11-08 17:11:41.248175] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.622 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:05.622 "name": "raid_bdev1", 00:24:05.622 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:05.622 "strip_size_kb": 0, 00:24:05.622 "state": "online", 00:24:05.622 "raid_level": "raid1", 00:24:05.622 "superblock": false, 00:24:05.622 "num_base_bdevs": 4, 00:24:05.622 "num_base_bdevs_discovered": 4, 00:24:05.622 "num_base_bdevs_operational": 4, 00:24:05.622 "process": { 00:24:05.622 "type": "rebuild", 00:24:05.622 "target": "spare", 00:24:05.622 "progress": { 00:24:05.622 "blocks": 20480, 00:24:05.622 "percent": 31 00:24:05.622 } 00:24:05.622 }, 00:24:05.622 "base_bdevs_list": [ 00:24:05.622 { 00:24:05.622 "name": "spare", 00:24:05.622 "uuid": "c8b8f56f-fdbf-5132-bfcd-31f2ed575a8e", 00:24:05.622 "is_configured": true, 00:24:05.622 "data_offset": 0, 00:24:05.622 "data_size": 65536 00:24:05.622 }, 00:24:05.622 { 00:24:05.622 "name": "BaseBdev2", 00:24:05.622 "uuid": "ea1192dc-d891-52f0-8498-0511e8a28df1", 00:24:05.622 "is_configured": true, 00:24:05.622 "data_offset": 0, 00:24:05.622 "data_size": 65536 00:24:05.622 }, 00:24:05.622 { 00:24:05.622 "name": "BaseBdev3", 00:24:05.622 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:05.622 "is_configured": true, 00:24:05.622 "data_offset": 0, 00:24:05.622 "data_size": 65536 00:24:05.622 }, 00:24:05.622 { 00:24:05.623 "name": "BaseBdev4", 00:24:05.623 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:05.623 "is_configured": true, 00:24:05.623 "data_offset": 0, 00:24:05.623 "data_size": 65536 00:24:05.623 } 00:24:05.623 ] 00:24:05.623 }' 00:24:05.623 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:24:05.623 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.623 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:05.880 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.880 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:24:05.880 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:05.880 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:05.880 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.881 [2024-11-08 17:11:42.369666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:05.881 [2024-11-08 17:11:42.455978] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:05.881 17:11:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:05.881 "name": "raid_bdev1", 00:24:05.881 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:05.881 "strip_size_kb": 0, 00:24:05.881 "state": "online", 00:24:05.881 "raid_level": "raid1", 00:24:05.881 "superblock": false, 00:24:05.881 "num_base_bdevs": 4, 00:24:05.881 "num_base_bdevs_discovered": 3, 00:24:05.881 "num_base_bdevs_operational": 3, 00:24:05.881 "process": { 00:24:05.881 "type": "rebuild", 00:24:05.881 "target": "spare", 00:24:05.881 "progress": { 00:24:05.881 "blocks": 24576, 00:24:05.881 "percent": 37 00:24:05.881 } 00:24:05.881 }, 00:24:05.881 "base_bdevs_list": [ 00:24:05.881 { 00:24:05.881 "name": "spare", 00:24:05.881 "uuid": "c8b8f56f-fdbf-5132-bfcd-31f2ed575a8e", 00:24:05.881 "is_configured": true, 00:24:05.881 "data_offset": 0, 00:24:05.881 "data_size": 65536 00:24:05.881 }, 00:24:05.881 { 00:24:05.881 "name": null, 00:24:05.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.881 "is_configured": false, 00:24:05.881 "data_offset": 0, 00:24:05.881 "data_size": 65536 00:24:05.881 }, 00:24:05.881 { 00:24:05.881 "name": "BaseBdev3", 00:24:05.881 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:05.881 "is_configured": true, 
00:24:05.881 "data_offset": 0, 00:24:05.881 "data_size": 65536 00:24:05.881 }, 00:24:05.881 { 00:24:05.881 "name": "BaseBdev4", 00:24:05.881 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:05.881 "is_configured": true, 00:24:05.881 "data_offset": 0, 00:24:05.881 "data_size": 65536 00:24:05.881 } 00:24:05.881 ] 00:24:05.881 }' 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=390 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.881 17:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.881 17:11:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.139 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:06.139 "name": "raid_bdev1", 00:24:06.139 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:06.139 "strip_size_kb": 0, 00:24:06.139 "state": "online", 00:24:06.139 "raid_level": "raid1", 00:24:06.139 "superblock": false, 00:24:06.139 "num_base_bdevs": 4, 00:24:06.139 "num_base_bdevs_discovered": 3, 00:24:06.139 "num_base_bdevs_operational": 3, 00:24:06.139 "process": { 00:24:06.139 "type": "rebuild", 00:24:06.139 "target": "spare", 00:24:06.139 "progress": { 00:24:06.139 "blocks": 26624, 00:24:06.139 "percent": 40 00:24:06.139 } 00:24:06.139 }, 00:24:06.139 "base_bdevs_list": [ 00:24:06.139 { 00:24:06.139 "name": "spare", 00:24:06.139 "uuid": "c8b8f56f-fdbf-5132-bfcd-31f2ed575a8e", 00:24:06.139 "is_configured": true, 00:24:06.139 "data_offset": 0, 00:24:06.139 "data_size": 65536 00:24:06.139 }, 00:24:06.139 { 00:24:06.139 "name": null, 00:24:06.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.139 "is_configured": false, 00:24:06.139 "data_offset": 0, 00:24:06.139 "data_size": 65536 00:24:06.139 }, 00:24:06.139 { 00:24:06.139 "name": "BaseBdev3", 00:24:06.139 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:06.139 "is_configured": true, 00:24:06.139 "data_offset": 0, 00:24:06.139 "data_size": 65536 00:24:06.139 }, 00:24:06.139 { 00:24:06.139 "name": "BaseBdev4", 00:24:06.139 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:06.139 "is_configured": true, 00:24:06.139 "data_offset": 0, 00:24:06.139 "data_size": 65536 00:24:06.139 } 00:24:06.139 ] 00:24:06.139 }' 00:24:06.139 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:06.139 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.139 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:24:06.139 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.139 17:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:07.071 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:07.071 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:07.072 "name": "raid_bdev1", 00:24:07.072 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:07.072 "strip_size_kb": 0, 00:24:07.072 "state": "online", 00:24:07.072 "raid_level": "raid1", 00:24:07.072 "superblock": false, 00:24:07.072 "num_base_bdevs": 4, 00:24:07.072 "num_base_bdevs_discovered": 3, 00:24:07.072 "num_base_bdevs_operational": 3, 00:24:07.072 "process": { 00:24:07.072 "type": "rebuild", 00:24:07.072 "target": "spare", 00:24:07.072 "progress": { 00:24:07.072 
"blocks": 49152, 00:24:07.072 "percent": 75 00:24:07.072 } 00:24:07.072 }, 00:24:07.072 "base_bdevs_list": [ 00:24:07.072 { 00:24:07.072 "name": "spare", 00:24:07.072 "uuid": "c8b8f56f-fdbf-5132-bfcd-31f2ed575a8e", 00:24:07.072 "is_configured": true, 00:24:07.072 "data_offset": 0, 00:24:07.072 "data_size": 65536 00:24:07.072 }, 00:24:07.072 { 00:24:07.072 "name": null, 00:24:07.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.072 "is_configured": false, 00:24:07.072 "data_offset": 0, 00:24:07.072 "data_size": 65536 00:24:07.072 }, 00:24:07.072 { 00:24:07.072 "name": "BaseBdev3", 00:24:07.072 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:07.072 "is_configured": true, 00:24:07.072 "data_offset": 0, 00:24:07.072 "data_size": 65536 00:24:07.072 }, 00:24:07.072 { 00:24:07.072 "name": "BaseBdev4", 00:24:07.072 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:07.072 "is_configured": true, 00:24:07.072 "data_offset": 0, 00:24:07.072 "data_size": 65536 00:24:07.072 } 00:24:07.072 ] 00:24:07.072 }' 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.072 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.329 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.329 17:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:07.894 [2024-11-08 17:11:44.468923] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:07.894 [2024-11-08 17:11:44.469159] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:07.894 [2024-11-08 17:11:44.469223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.152 17:11:44 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:08.152 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.152 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.152 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:08.152 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:08.152 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.152 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.152 17:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.152 17:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.152 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.153 17:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.153 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:08.153 "name": "raid_bdev1", 00:24:08.153 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:08.153 "strip_size_kb": 0, 00:24:08.153 "state": "online", 00:24:08.153 "raid_level": "raid1", 00:24:08.153 "superblock": false, 00:24:08.153 "num_base_bdevs": 4, 00:24:08.153 "num_base_bdevs_discovered": 3, 00:24:08.153 "num_base_bdevs_operational": 3, 00:24:08.153 "base_bdevs_list": [ 00:24:08.153 { 00:24:08.153 "name": "spare", 00:24:08.153 "uuid": "c8b8f56f-fdbf-5132-bfcd-31f2ed575a8e", 00:24:08.153 "is_configured": true, 00:24:08.153 "data_offset": 0, 00:24:08.153 "data_size": 65536 00:24:08.153 }, 00:24:08.153 { 00:24:08.153 "name": null, 00:24:08.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.153 "is_configured": false, 00:24:08.153 
"data_offset": 0, 00:24:08.153 "data_size": 65536 00:24:08.153 }, 00:24:08.153 { 00:24:08.153 "name": "BaseBdev3", 00:24:08.153 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:08.153 "is_configured": true, 00:24:08.153 "data_offset": 0, 00:24:08.153 "data_size": 65536 00:24:08.153 }, 00:24:08.153 { 00:24:08.153 "name": "BaseBdev4", 00:24:08.153 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:08.153 "is_configured": true, 00:24:08.153 "data_offset": 0, 00:24:08.153 "data_size": 65536 00:24:08.153 } 00:24:08.153 ] 00:24:08.153 }' 00:24:08.153 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.413 17:11:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:08.413 "name": "raid_bdev1", 00:24:08.413 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:08.413 "strip_size_kb": 0, 00:24:08.413 "state": "online", 00:24:08.413 "raid_level": "raid1", 00:24:08.413 "superblock": false, 00:24:08.413 "num_base_bdevs": 4, 00:24:08.413 "num_base_bdevs_discovered": 3, 00:24:08.413 "num_base_bdevs_operational": 3, 00:24:08.413 "base_bdevs_list": [ 00:24:08.413 { 00:24:08.413 "name": "spare", 00:24:08.413 "uuid": "c8b8f56f-fdbf-5132-bfcd-31f2ed575a8e", 00:24:08.413 "is_configured": true, 00:24:08.413 "data_offset": 0, 00:24:08.413 "data_size": 65536 00:24:08.413 }, 00:24:08.413 { 00:24:08.413 "name": null, 00:24:08.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.413 "is_configured": false, 00:24:08.413 "data_offset": 0, 00:24:08.413 "data_size": 65536 00:24:08.413 }, 00:24:08.413 { 00:24:08.413 "name": "BaseBdev3", 00:24:08.413 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:08.413 "is_configured": true, 00:24:08.413 "data_offset": 0, 00:24:08.413 "data_size": 65536 00:24:08.413 }, 00:24:08.413 { 00:24:08.413 "name": "BaseBdev4", 00:24:08.413 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:08.413 "is_configured": true, 00:24:08.413 "data_offset": 0, 00:24:08.413 "data_size": 65536 00:24:08.413 } 00:24:08.413 ] 00:24:08.413 }' 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:08.413 17:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:08.413 "name": "raid_bdev1", 00:24:08.413 "uuid": "014834d4-50e5-47df-977c-30e5a723080a", 00:24:08.413 "strip_size_kb": 0, 00:24:08.413 "state": "online", 00:24:08.413 "raid_level": "raid1", 00:24:08.413 "superblock": false, 00:24:08.413 "num_base_bdevs": 4, 00:24:08.413 
"num_base_bdevs_discovered": 3, 00:24:08.413 "num_base_bdevs_operational": 3, 00:24:08.413 "base_bdevs_list": [ 00:24:08.413 { 00:24:08.413 "name": "spare", 00:24:08.413 "uuid": "c8b8f56f-fdbf-5132-bfcd-31f2ed575a8e", 00:24:08.413 "is_configured": true, 00:24:08.413 "data_offset": 0, 00:24:08.413 "data_size": 65536 00:24:08.413 }, 00:24:08.413 { 00:24:08.413 "name": null, 00:24:08.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.413 "is_configured": false, 00:24:08.413 "data_offset": 0, 00:24:08.413 "data_size": 65536 00:24:08.413 }, 00:24:08.413 { 00:24:08.413 "name": "BaseBdev3", 00:24:08.413 "uuid": "98c74db5-66bf-5d9d-ba30-72a9e360c4ff", 00:24:08.413 "is_configured": true, 00:24:08.413 "data_offset": 0, 00:24:08.413 "data_size": 65536 00:24:08.413 }, 00:24:08.413 { 00:24:08.413 "name": "BaseBdev4", 00:24:08.413 "uuid": "3ac0d33e-7899-52d8-a051-216a06fdbed4", 00:24:08.413 "is_configured": true, 00:24:08.413 "data_offset": 0, 00:24:08.413 "data_size": 65536 00:24:08.413 } 00:24:08.413 ] 00:24:08.413 }' 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:08.413 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.683 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:08.683 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.683 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.683 [2024-11-08 17:11:45.380336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:08.683 [2024-11-08 17:11:45.380502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:08.683 [2024-11-08 17:11:45.380608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:08.683 [2024-11-08 17:11:45.380716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:24:08.683 [2024-11-08 17:11:45.380728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:08.683 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.683 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:24:08.683 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.683 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.683 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:08.943 17:11:45 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:08.943 /dev/nbd0 00:24:08.943 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:09.201 1+0 records in 00:24:09.201 1+0 records out 00:24:09.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692101 s, 5.9 MB/s 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:09.201 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:09.459 /dev/nbd1 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:09.459 1+0 records in 00:24:09.459 1+0 records out 00:24:09.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462789 s, 8.9 MB/s 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:09.459 17:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:09.459 17:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:09.459 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:09.459 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:09.459 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:09.459 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:09.459 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:09.459 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:09.717 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:09.717 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:09.718 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:09.718 17:11:46 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:09.718 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:09.718 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:09.718 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:09.718 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:09.718 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:09.718 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75999 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75999 ']' 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75999 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # 
uname 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75999 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:09.976 killing process with pid 75999 00:24:09.976 Received shutdown signal, test time was about 60.000000 seconds 00:24:09.976 00:24:09.976 Latency(us) 00:24:09.976 [2024-11-08T17:11:46.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.976 [2024-11-08T17:11:46.691Z] =================================================================================================================== 00:24:09.976 [2024-11-08T17:11:46.691Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75999' 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75999 00:24:09.976 [2024-11-08 17:11:46.609676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:09.976 17:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75999 00:24:10.234 [2024-11-08 17:11:46.933746] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:11.188 ************************************ 00:24:11.188 END TEST raid_rebuild_test 00:24:11.188 ************************************ 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:24:11.188 00:24:11.188 real 0m22.018s 00:24:11.188 user 0m22.223s 00:24:11.188 sys 0m4.628s 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:24:11.188 17:11:47 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:24:11.188 17:11:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:11.188 17:11:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:11.188 17:11:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:11.188 ************************************ 00:24:11.188 START TEST raid_rebuild_test_sb 00:24:11.188 ************************************ 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:11.188 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76495 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 76495 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 76495 ']' 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:11.189 17:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.189 [2024-11-08 17:11:47.819026] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:24:11.189 [2024-11-08 17:11:47.819347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76495 ] 00:24:11.189 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:11.189 Zero copy mechanism will not be used. 
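The entries above launch bdevperf with `-z` and then block in `waitforlisten 76495` until the RPC socket at `/var/tmp/spdk.sock` comes up. Below is a minimal sketch of that polling pattern — an illustration only, not SPDK's actual `common/autotest_common.sh` helper; the function name, retry count, and sleep interval are assumptions:

```shell
# Sketch of a waitforlisten-style loop: poll until the given pid has its
# UNIX-domain RPC socket on disk, failing fast if the process dies first.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1  # target exited before listening
        [ -S "$rpc_addr" ] && return 0          # socket exists: ready for rpc.py
        sleep 0.1
    done
    return 1                                    # timed out waiting for the socket
}
```

Once this returns 0, the test can start issuing `rpc.py -s /var/tmp/spdk.sock` commands, as the subsequent log entries do.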
00:24:11.459 [2024-11-08 17:11:47.978826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.459 [2024-11-08 17:11:48.100806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.716 [2024-11-08 17:11:48.281789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:11.716 [2024-11-08 17:11:48.282033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:11.973 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:11.973 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:24:11.973 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:11.973 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:11.973 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.973 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.232 BaseBdev1_malloc 00:24:12.232 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.232 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:12.232 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.232 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.232 [2024-11-08 17:11:48.704285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:12.232 [2024-11-08 17:11:48.704520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.232 [2024-11-08 17:11:48.704554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:12.232 [2024-11-08 
17:11:48.704567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.232 [2024-11-08 17:11:48.706881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.232 [2024-11-08 17:11:48.706918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:12.232 BaseBdev1 00:24:12.232 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.232 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:12.232 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:12.232 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 BaseBdev2_malloc 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 [2024-11-08 17:11:48.747107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:12.233 [2024-11-08 17:11:48.747333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.233 [2024-11-08 17:11:48.747359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:12.233 [2024-11-08 17:11:48.747373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.233 [2024-11-08 17:11:48.749863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:24:12.233 [2024-11-08 17:11:48.749918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:12.233 BaseBdev2 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 BaseBdev3_malloc 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 [2024-11-08 17:11:48.802925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:12.233 [2024-11-08 17:11:48.803155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.233 [2024-11-08 17:11:48.803203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:12.233 [2024-11-08 17:11:48.803353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.233 [2024-11-08 17:11:48.805707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.233 [2024-11-08 17:11:48.805838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:12.233 BaseBdev3 00:24:12.233 17:11:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 BaseBdev4_malloc 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 [2024-11-08 17:11:48.845686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:12.233 [2024-11-08 17:11:48.845898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.233 [2024-11-08 17:11:48.845945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:12.233 [2024-11-08 17:11:48.846426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.233 [2024-11-08 17:11:48.848811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.233 [2024-11-08 17:11:48.848922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:12.233 BaseBdev4 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 spare_malloc 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 spare_delay 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 [2024-11-08 17:11:48.895871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:12.233 [2024-11-08 17:11:48.896036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.233 [2024-11-08 17:11:48.896108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:12.233 [2024-11-08 17:11:48.896160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.233 [2024-11-08 17:11:48.898405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.233 [2024-11-08 17:11:48.898518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:12.233 spare 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 [2024-11-08 17:11:48.903939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:12.233 [2024-11-08 17:11:48.905896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:12.233 [2024-11-08 17:11:48.905964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:12.233 [2024-11-08 17:11:48.906016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:12.233 [2024-11-08 17:11:48.906200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:12.233 [2024-11-08 17:11:48.906216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:12.233 [2024-11-08 17:11:48.906472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:12.233 [2024-11-08 17:11:48.906636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:12.233 [2024-11-08 17:11:48.906645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:12.233 [2024-11-08 17:11:48.906814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:12.233 17:11:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.233 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.233 "name": "raid_bdev1", 00:24:12.233 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:12.233 "strip_size_kb": 0, 00:24:12.233 "state": "online", 00:24:12.233 "raid_level": "raid1", 00:24:12.233 "superblock": true, 00:24:12.233 "num_base_bdevs": 4, 00:24:12.233 "num_base_bdevs_discovered": 4, 00:24:12.233 "num_base_bdevs_operational": 4, 00:24:12.233 "base_bdevs_list": [ 00:24:12.233 { 
00:24:12.233 "name": "BaseBdev1", 00:24:12.233 "uuid": "9ed34b74-524a-5efa-b57f-e50de18bc775", 00:24:12.233 "is_configured": true, 00:24:12.233 "data_offset": 2048, 00:24:12.233 "data_size": 63488 00:24:12.233 }, 00:24:12.233 { 00:24:12.233 "name": "BaseBdev2", 00:24:12.233 "uuid": "77977982-a89b-5406-bdbe-ea8f75b87179", 00:24:12.233 "is_configured": true, 00:24:12.233 "data_offset": 2048, 00:24:12.233 "data_size": 63488 00:24:12.233 }, 00:24:12.233 { 00:24:12.233 "name": "BaseBdev3", 00:24:12.233 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:12.233 "is_configured": true, 00:24:12.233 "data_offset": 2048, 00:24:12.233 "data_size": 63488 00:24:12.233 }, 00:24:12.233 { 00:24:12.233 "name": "BaseBdev4", 00:24:12.233 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:12.233 "is_configured": true, 00:24:12.233 "data_offset": 2048, 00:24:12.233 "data_size": 63488 00:24:12.233 } 00:24:12.233 ] 00:24:12.234 }' 00:24:12.234 17:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.234 17:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:12.801 [2024-11-08 17:11:49.244398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
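At this point the test derives `raid_bdev_size` from `bdev_get_bdevs -b raid_bdev1` and is about to read `data_offset` from `bdev_raid_get_bdevs all`. A self-contained sketch of that jq extraction follows; the literal JSON replies are hypothetical, trimmed stand-ins for live RPC output, and jq availability is assumed:

```shell
# Extract the two fields the test reads from the RPC replies. The JSON here
# is a hypothetical abbreviation of what rpc.py would return; only the
# fields the jq filters touch are kept.
bdev_reply='[{"name":"raid_bdev1","num_blocks":63488}]'
raid_reply='[{"name":"raid_bdev1","base_bdevs_list":[{"data_offset":2048}]}]'
raid_bdev_size=$(jq -r '.[].num_blocks' <<< "$bdev_reply")
data_offset=$(jq -r '.[].base_bdevs_list[0].data_offset' <<< "$raid_reply")
echo "size=$raid_bdev_size offset=$data_offset"
```

The same filters appear in the log (`jq -r '.[].num_blocks'` and `jq -r '.[].base_bdevs_list[0].data_offset'`), yielding 63488 blocks and a 2048-block data offset for the superblock case.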
00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:12.801 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:12.802 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:12.802 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:12.802 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:12.802 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:12.802 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:12.802 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.802 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:12.802 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:12.802 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.802 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:12.802 
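With the raid bdev about to be exposed over `/dev/nbd0`, the test fills it via `dd if=/dev/urandom ... bs=512 count=63488` and verifies rebuilt members with `cmp -i 0`. Here is a file-backed sketch of that write-then-compare step; regular temp files stand in for the NBD devices so it runs without an NBD setup, the block count is scaled down, and `oflag=direct` is dropped since it targets regular files:

```shell
# Write random data to one stand-in "device", mirror it, and verify byte
# equality, mimicking the dd + cmp verification in bdev_raid.sh.
nbd0=$(mktemp)
nbd1=$(mktemp)
dd if=/dev/urandom of="$nbd0" bs=512 count=128 status=none
cp "$nbd0" "$nbd1"          # a successful rebuild leaves members identical
cmp -i 0 "$nbd0" "$nbd1" && match=yes || match=no
echo "mirrors match: $match"
rm -f "$nbd0" "$nbd1"
```

In the real test the second device is a raid1 member that was removed and rebuilt, so a clean `cmp` exit is what proves the rebuild reproduced the data.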
[2024-11-08 17:11:49.488146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:12.802 /dev/nbd0 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:13.063 1+0 records in 00:24:13.063 1+0 records out 00:24:13.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578663 s, 7.1 MB/s 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 
0 ']' 00:24:13.063 17:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:24:13.064 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:13.064 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:13.064 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:13.064 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:13.064 17:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:23.046 63488+0 records in 00:24:23.046 63488+0 records out 00:24:23.046 32505856 bytes (33 MB, 31 MiB) copied, 8.86929 s, 3.7 MB/s 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:23.046 [2024-11-08 17:11:58.612823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.046 [2024-11-08 17:11:58.641130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:23.046 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:23.047 17:11:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:23.047 "name": "raid_bdev1", 00:24:23.047 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:23.047 "strip_size_kb": 0, 00:24:23.047 "state": "online", 00:24:23.047 "raid_level": "raid1", 00:24:23.047 "superblock": true, 00:24:23.047 "num_base_bdevs": 4, 00:24:23.047 "num_base_bdevs_discovered": 3, 00:24:23.047 "num_base_bdevs_operational": 3, 00:24:23.047 "base_bdevs_list": [ 00:24:23.047 { 00:24:23.047 "name": null, 00:24:23.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.047 "is_configured": false, 00:24:23.047 "data_offset": 0, 00:24:23.047 "data_size": 63488 00:24:23.047 }, 00:24:23.047 { 00:24:23.047 "name": "BaseBdev2", 00:24:23.047 "uuid": "77977982-a89b-5406-bdbe-ea8f75b87179", 00:24:23.047 "is_configured": true, 00:24:23.047 "data_offset": 2048, 00:24:23.047 "data_size": 63488 00:24:23.047 }, 00:24:23.047 { 00:24:23.047 "name": "BaseBdev3", 00:24:23.047 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:23.047 "is_configured": true, 00:24:23.047 "data_offset": 2048, 00:24:23.047 "data_size": 63488 00:24:23.047 }, 00:24:23.047 { 00:24:23.047 "name": "BaseBdev4", 00:24:23.047 "uuid": 
"b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:23.047 "is_configured": true, 00:24:23.047 "data_offset": 2048, 00:24:23.047 "data_size": 63488 00:24:23.047 } 00:24:23.047 ] 00:24:23.047 }' 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.047 [2024-11-08 17:11:58.985214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:23.047 [2024-11-08 17:11:58.995864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.047 17:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:23.047 [2024-11-08 17:11:58.998067] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:23.304 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:23.304 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:23.304 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:23.304 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:23.304 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:23.304 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.304 17:12:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.304 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.304 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:23.563 "name": "raid_bdev1", 00:24:23.563 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:23.563 "strip_size_kb": 0, 00:24:23.563 "state": "online", 00:24:23.563 "raid_level": "raid1", 00:24:23.563 "superblock": true, 00:24:23.563 "num_base_bdevs": 4, 00:24:23.563 "num_base_bdevs_discovered": 4, 00:24:23.563 "num_base_bdevs_operational": 4, 00:24:23.563 "process": { 00:24:23.563 "type": "rebuild", 00:24:23.563 "target": "spare", 00:24:23.563 "progress": { 00:24:23.563 "blocks": 20480, 00:24:23.563 "percent": 32 00:24:23.563 } 00:24:23.563 }, 00:24:23.563 "base_bdevs_list": [ 00:24:23.563 { 00:24:23.563 "name": "spare", 00:24:23.563 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:23.563 "is_configured": true, 00:24:23.563 "data_offset": 2048, 00:24:23.563 "data_size": 63488 00:24:23.563 }, 00:24:23.563 { 00:24:23.563 "name": "BaseBdev2", 00:24:23.563 "uuid": "77977982-a89b-5406-bdbe-ea8f75b87179", 00:24:23.563 "is_configured": true, 00:24:23.563 "data_offset": 2048, 00:24:23.563 "data_size": 63488 00:24:23.563 }, 00:24:23.563 { 00:24:23.563 "name": "BaseBdev3", 00:24:23.563 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:23.563 "is_configured": true, 00:24:23.563 "data_offset": 2048, 00:24:23.563 "data_size": 63488 00:24:23.563 }, 00:24:23.563 { 00:24:23.563 "name": "BaseBdev4", 00:24:23.563 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:23.563 "is_configured": true, 00:24:23.563 "data_offset": 2048, 00:24:23.563 "data_size": 63488 
00:24:23.563 } 00:24:23.563 ] 00:24:23.563 }' 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.563 [2024-11-08 17:12:00.108046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:23.563 [2024-11-08 17:12:00.205818] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:23.563 [2024-11-08 17:12:00.206082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.563 [2024-11-08 17:12:00.206220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:23.563 [2024-11-08 17:12:00.206261] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:23.563 "name": "raid_bdev1", 00:24:23.563 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:23.563 "strip_size_kb": 0, 00:24:23.563 "state": "online", 00:24:23.563 "raid_level": "raid1", 00:24:23.563 "superblock": true, 00:24:23.563 "num_base_bdevs": 4, 00:24:23.563 "num_base_bdevs_discovered": 3, 00:24:23.563 "num_base_bdevs_operational": 3, 00:24:23.563 "base_bdevs_list": [ 00:24:23.563 { 00:24:23.563 "name": null, 00:24:23.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.563 "is_configured": false, 00:24:23.563 "data_offset": 0, 00:24:23.563 "data_size": 63488 00:24:23.563 }, 00:24:23.563 { 00:24:23.563 "name": "BaseBdev2", 00:24:23.563 "uuid": 
"77977982-a89b-5406-bdbe-ea8f75b87179", 00:24:23.563 "is_configured": true, 00:24:23.563 "data_offset": 2048, 00:24:23.563 "data_size": 63488 00:24:23.563 }, 00:24:23.563 { 00:24:23.563 "name": "BaseBdev3", 00:24:23.563 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:23.563 "is_configured": true, 00:24:23.563 "data_offset": 2048, 00:24:23.563 "data_size": 63488 00:24:23.563 }, 00:24:23.563 { 00:24:23.563 "name": "BaseBdev4", 00:24:23.563 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:23.563 "is_configured": true, 00:24:23.563 "data_offset": 2048, 00:24:23.563 "data_size": 63488 00:24:23.563 } 00:24:23.563 ] 00:24:23.563 }' 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:23.563 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.821 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:23.821 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:23.821 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:23.821 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:23.821 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:24.078 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.078 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.078 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.078 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.078 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.078 17:12:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:24.078 "name": "raid_bdev1", 00:24:24.078 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:24.078 "strip_size_kb": 0, 00:24:24.078 "state": "online", 00:24:24.078 "raid_level": "raid1", 00:24:24.078 "superblock": true, 00:24:24.078 "num_base_bdevs": 4, 00:24:24.078 "num_base_bdevs_discovered": 3, 00:24:24.078 "num_base_bdevs_operational": 3, 00:24:24.078 "base_bdevs_list": [ 00:24:24.078 { 00:24:24.078 "name": null, 00:24:24.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.078 "is_configured": false, 00:24:24.078 "data_offset": 0, 00:24:24.078 "data_size": 63488 00:24:24.078 }, 00:24:24.079 { 00:24:24.079 "name": "BaseBdev2", 00:24:24.079 "uuid": "77977982-a89b-5406-bdbe-ea8f75b87179", 00:24:24.079 "is_configured": true, 00:24:24.079 "data_offset": 2048, 00:24:24.079 "data_size": 63488 00:24:24.079 }, 00:24:24.079 { 00:24:24.079 "name": "BaseBdev3", 00:24:24.079 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:24.079 "is_configured": true, 00:24:24.079 "data_offset": 2048, 00:24:24.079 "data_size": 63488 00:24:24.079 }, 00:24:24.079 { 00:24:24.079 "name": "BaseBdev4", 00:24:24.079 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:24.079 "is_configured": true, 00:24:24.079 "data_offset": 2048, 00:24:24.079 "data_size": 63488 00:24:24.079 } 00:24:24.079 ] 00:24:24.079 }' 00:24:24.079 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:24.079 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:24.079 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:24.079 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:24.079 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:24.079 17:12:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.079 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.079 [2024-11-08 17:12:00.629437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:24.079 [2024-11-08 17:12:00.639519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:24:24.079 17:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.079 17:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:24.079 [2024-11-08 17:12:00.641829] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:25.013 "name": "raid_bdev1", 00:24:25.013 "uuid": 
"5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:25.013 "strip_size_kb": 0, 00:24:25.013 "state": "online", 00:24:25.013 "raid_level": "raid1", 00:24:25.013 "superblock": true, 00:24:25.013 "num_base_bdevs": 4, 00:24:25.013 "num_base_bdevs_discovered": 4, 00:24:25.013 "num_base_bdevs_operational": 4, 00:24:25.013 "process": { 00:24:25.013 "type": "rebuild", 00:24:25.013 "target": "spare", 00:24:25.013 "progress": { 00:24:25.013 "blocks": 20480, 00:24:25.013 "percent": 32 00:24:25.013 } 00:24:25.013 }, 00:24:25.013 "base_bdevs_list": [ 00:24:25.013 { 00:24:25.013 "name": "spare", 00:24:25.013 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:25.013 "is_configured": true, 00:24:25.013 "data_offset": 2048, 00:24:25.013 "data_size": 63488 00:24:25.013 }, 00:24:25.013 { 00:24:25.013 "name": "BaseBdev2", 00:24:25.013 "uuid": "77977982-a89b-5406-bdbe-ea8f75b87179", 00:24:25.013 "is_configured": true, 00:24:25.013 "data_offset": 2048, 00:24:25.013 "data_size": 63488 00:24:25.013 }, 00:24:25.013 { 00:24:25.013 "name": "BaseBdev3", 00:24:25.013 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:25.013 "is_configured": true, 00:24:25.013 "data_offset": 2048, 00:24:25.013 "data_size": 63488 00:24:25.013 }, 00:24:25.013 { 00:24:25.013 "name": "BaseBdev4", 00:24:25.013 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:25.013 "is_configured": true, 00:24:25.013 "data_offset": 2048, 00:24:25.013 "data_size": 63488 00:24:25.013 } 00:24:25.013 ] 00:24:25.013 }' 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.013 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:25.271 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.271 [2024-11-08 17:12:01.755771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:25.271 [2024-11-08 17:12:01.949571] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.271 17:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.528 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:25.528 "name": "raid_bdev1", 00:24:25.528 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:25.528 "strip_size_kb": 0, 00:24:25.528 "state": "online", 00:24:25.528 "raid_level": "raid1", 00:24:25.528 "superblock": true, 00:24:25.528 "num_base_bdevs": 4, 00:24:25.528 "num_base_bdevs_discovered": 3, 00:24:25.528 "num_base_bdevs_operational": 3, 00:24:25.528 "process": { 00:24:25.528 "type": "rebuild", 00:24:25.528 "target": "spare", 00:24:25.528 "progress": { 00:24:25.528 "blocks": 24576, 00:24:25.528 "percent": 38 00:24:25.528 } 00:24:25.528 }, 00:24:25.528 "base_bdevs_list": [ 00:24:25.528 { 00:24:25.528 "name": "spare", 00:24:25.528 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:25.528 "is_configured": true, 00:24:25.528 "data_offset": 2048, 00:24:25.528 "data_size": 63488 00:24:25.528 }, 00:24:25.528 { 00:24:25.528 "name": null, 00:24:25.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.528 "is_configured": false, 00:24:25.528 "data_offset": 0, 00:24:25.528 "data_size": 63488 00:24:25.528 }, 00:24:25.528 { 00:24:25.528 "name": "BaseBdev3", 00:24:25.528 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:25.528 "is_configured": true, 00:24:25.528 "data_offset": 2048, 00:24:25.528 "data_size": 63488 00:24:25.528 }, 00:24:25.528 { 00:24:25.528 "name": 
"BaseBdev4", 00:24:25.528 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:25.528 "is_configured": true, 00:24:25.528 "data_offset": 2048, 00:24:25.528 "data_size": 63488 00:24:25.528 } 00:24:25.528 ] 00:24:25.528 }' 00:24:25.528 17:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=410 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:25.528 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:25.528 "name": "raid_bdev1", 00:24:25.528 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:25.528 "strip_size_kb": 0, 00:24:25.528 "state": "online", 00:24:25.528 "raid_level": "raid1", 00:24:25.528 "superblock": true, 00:24:25.528 "num_base_bdevs": 4, 00:24:25.528 "num_base_bdevs_discovered": 3, 00:24:25.528 "num_base_bdevs_operational": 3, 00:24:25.528 "process": { 00:24:25.528 "type": "rebuild", 00:24:25.528 "target": "spare", 00:24:25.528 "progress": { 00:24:25.528 "blocks": 26624, 00:24:25.528 "percent": 41 00:24:25.528 } 00:24:25.528 }, 00:24:25.528 "base_bdevs_list": [ 00:24:25.528 { 00:24:25.528 "name": "spare", 00:24:25.528 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:25.528 "is_configured": true, 00:24:25.528 "data_offset": 2048, 00:24:25.528 "data_size": 63488 00:24:25.528 }, 00:24:25.528 { 00:24:25.528 "name": null, 00:24:25.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.528 "is_configured": false, 00:24:25.529 "data_offset": 0, 00:24:25.529 "data_size": 63488 00:24:25.529 }, 00:24:25.529 { 00:24:25.529 "name": "BaseBdev3", 00:24:25.529 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:25.529 "is_configured": true, 00:24:25.529 "data_offset": 2048, 00:24:25.529 "data_size": 63488 00:24:25.529 }, 00:24:25.529 { 00:24:25.529 "name": "BaseBdev4", 00:24:25.529 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:25.529 "is_configured": true, 00:24:25.529 "data_offset": 2048, 00:24:25.529 "data_size": 63488 00:24:25.529 } 00:24:25.529 ] 00:24:25.529 }' 00:24:25.529 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:25.529 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.529 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:25.529 17:12:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.529 17:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:26.462 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:26.462 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:26.462 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:26.462 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:26.462 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:26.462 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:26.462 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.462 17:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.462 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.462 17:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.719 17:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.719 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:26.719 "name": "raid_bdev1", 00:24:26.719 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:26.719 "strip_size_kb": 0, 00:24:26.719 "state": "online", 00:24:26.719 "raid_level": "raid1", 00:24:26.719 "superblock": true, 00:24:26.719 "num_base_bdevs": 4, 00:24:26.719 "num_base_bdevs_discovered": 3, 00:24:26.719 "num_base_bdevs_operational": 3, 00:24:26.719 "process": { 00:24:26.719 "type": "rebuild", 00:24:26.719 "target": "spare", 00:24:26.719 "progress": { 00:24:26.719 "blocks": 
49152, 00:24:26.719 "percent": 77 00:24:26.719 } 00:24:26.719 }, 00:24:26.719 "base_bdevs_list": [ 00:24:26.719 { 00:24:26.719 "name": "spare", 00:24:26.719 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:26.719 "is_configured": true, 00:24:26.719 "data_offset": 2048, 00:24:26.719 "data_size": 63488 00:24:26.719 }, 00:24:26.719 { 00:24:26.719 "name": null, 00:24:26.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.719 "is_configured": false, 00:24:26.719 "data_offset": 0, 00:24:26.719 "data_size": 63488 00:24:26.719 }, 00:24:26.719 { 00:24:26.720 "name": "BaseBdev3", 00:24:26.720 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:26.720 "is_configured": true, 00:24:26.720 "data_offset": 2048, 00:24:26.720 "data_size": 63488 00:24:26.720 }, 00:24:26.720 { 00:24:26.720 "name": "BaseBdev4", 00:24:26.720 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:26.720 "is_configured": true, 00:24:26.720 "data_offset": 2048, 00:24:26.720 "data_size": 63488 00:24:26.720 } 00:24:26.720 ] 00:24:26.720 }' 00:24:26.720 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:26.720 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:26.720 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:26.720 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:26.720 17:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:27.285 [2024-11-08 17:12:03.861518] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:27.285 [2024-11-08 17:12:03.861802] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:27.285 [2024-11-08 17:12:03.862019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.851 17:12:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:27.851 "name": "raid_bdev1", 00:24:27.851 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:27.851 "strip_size_kb": 0, 00:24:27.851 "state": "online", 00:24:27.851 "raid_level": "raid1", 00:24:27.851 "superblock": true, 00:24:27.851 "num_base_bdevs": 4, 00:24:27.851 "num_base_bdevs_discovered": 3, 00:24:27.851 "num_base_bdevs_operational": 3, 00:24:27.851 "base_bdevs_list": [ 00:24:27.851 { 00:24:27.851 "name": "spare", 00:24:27.851 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:27.851 "is_configured": true, 00:24:27.851 "data_offset": 2048, 00:24:27.851 "data_size": 63488 00:24:27.851 }, 00:24:27.851 { 00:24:27.851 "name": null, 00:24:27.851 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:27.851 "is_configured": false, 00:24:27.851 "data_offset": 0, 00:24:27.851 "data_size": 63488 00:24:27.851 }, 00:24:27.851 { 00:24:27.851 "name": "BaseBdev3", 00:24:27.851 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:27.851 "is_configured": true, 00:24:27.851 "data_offset": 2048, 00:24:27.851 "data_size": 63488 00:24:27.851 }, 00:24:27.851 { 00:24:27.851 "name": "BaseBdev4", 00:24:27.851 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:27.851 "is_configured": true, 00:24:27.851 "data_offset": 2048, 00:24:27.851 "data_size": 63488 00:24:27.851 } 00:24:27.851 ] 00:24:27.851 }' 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:27.851 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.852 
17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:27.852 "name": "raid_bdev1", 00:24:27.852 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:27.852 "strip_size_kb": 0, 00:24:27.852 "state": "online", 00:24:27.852 "raid_level": "raid1", 00:24:27.852 "superblock": true, 00:24:27.852 "num_base_bdevs": 4, 00:24:27.852 "num_base_bdevs_discovered": 3, 00:24:27.852 "num_base_bdevs_operational": 3, 00:24:27.852 "base_bdevs_list": [ 00:24:27.852 { 00:24:27.852 "name": "spare", 00:24:27.852 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:27.852 "is_configured": true, 00:24:27.852 "data_offset": 2048, 00:24:27.852 "data_size": 63488 00:24:27.852 }, 00:24:27.852 { 00:24:27.852 "name": null, 00:24:27.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.852 "is_configured": false, 00:24:27.852 "data_offset": 0, 00:24:27.852 "data_size": 63488 00:24:27.852 }, 00:24:27.852 { 00:24:27.852 "name": "BaseBdev3", 00:24:27.852 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:27.852 "is_configured": true, 00:24:27.852 "data_offset": 2048, 00:24:27.852 "data_size": 63488 00:24:27.852 }, 00:24:27.852 { 00:24:27.852 "name": "BaseBdev4", 00:24:27.852 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:27.852 "is_configured": true, 00:24:27.852 "data_offset": 2048, 00:24:27.852 "data_size": 63488 00:24:27.852 } 00:24:27.852 ] 00:24:27.852 }' 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.852 "name": "raid_bdev1", 00:24:27.852 "uuid": 
"5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:27.852 "strip_size_kb": 0, 00:24:27.852 "state": "online", 00:24:27.852 "raid_level": "raid1", 00:24:27.852 "superblock": true, 00:24:27.852 "num_base_bdevs": 4, 00:24:27.852 "num_base_bdevs_discovered": 3, 00:24:27.852 "num_base_bdevs_operational": 3, 00:24:27.852 "base_bdevs_list": [ 00:24:27.852 { 00:24:27.852 "name": "spare", 00:24:27.852 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:27.852 "is_configured": true, 00:24:27.852 "data_offset": 2048, 00:24:27.852 "data_size": 63488 00:24:27.852 }, 00:24:27.852 { 00:24:27.852 "name": null, 00:24:27.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.852 "is_configured": false, 00:24:27.852 "data_offset": 0, 00:24:27.852 "data_size": 63488 00:24:27.852 }, 00:24:27.852 { 00:24:27.852 "name": "BaseBdev3", 00:24:27.852 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:27.852 "is_configured": true, 00:24:27.852 "data_offset": 2048, 00:24:27.852 "data_size": 63488 00:24:27.852 }, 00:24:27.852 { 00:24:27.852 "name": "BaseBdev4", 00:24:27.852 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:27.852 "is_configured": true, 00:24:27.852 "data_offset": 2048, 00:24:27.852 "data_size": 63488 00:24:27.852 } 00:24:27.852 ] 00:24:27.852 }' 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.852 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.110 [2024-11-08 17:12:04.773169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:28.110 [2024-11-08 17:12:04.773204] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:24:28.110 [2024-11-08 17:12:04.773297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:28.110 [2024-11-08 17:12:04.773386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:28.110 [2024-11-08 17:12:04.773398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:28.110 17:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:28.370 /dev/nbd0 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:28.370 1+0 records in 00:24:28.370 1+0 records out 00:24:28.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553394 s, 7.4 MB/s 00:24:28.370 17:12:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:28.370 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:28.655 /dev/nbd1 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:28.655 1+0 records in 00:24:28.655 1+0 records out 00:24:28.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377926 s, 10.8 MB/s 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:28.655 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:28.914 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:28.914 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:28.914 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:28.914 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:28.914 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:28.914 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:28.914 17:12:05 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:29.172 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:29.172 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:29.172 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:29.172 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.172 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.172 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:29.172 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:29.172 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.172 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.172 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.430 [2024-11-08 17:12:05.968003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:29.430 [2024-11-08 17:12:05.968069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.430 [2024-11-08 17:12:05.968097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:29.430 [2024-11-08 17:12:05.968108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.430 [2024-11-08 17:12:05.970604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.430 [2024-11-08 17:12:05.970644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:29.430 [2024-11-08 17:12:05.970766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:29.430 [2024-11-08 17:12:05.970824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:29.430 [2024-11-08 17:12:05.970969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:24:29.430 [2024-11-08 17:12:05.971079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:29.430 spare 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.430 17:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.430 [2024-11-08 17:12:06.071196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:29.430 [2024-11-08 17:12:06.071251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:29.430 [2024-11-08 17:12:06.071651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:24:29.430 [2024-11-08 17:12:06.071882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:29.430 [2024-11-08 17:12:06.071904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:29.430 [2024-11-08 17:12:06.072108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:29.430 "name": "raid_bdev1", 00:24:29.430 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:29.430 "strip_size_kb": 0, 00:24:29.430 "state": "online", 00:24:29.430 "raid_level": "raid1", 00:24:29.430 "superblock": true, 00:24:29.430 "num_base_bdevs": 4, 00:24:29.430 "num_base_bdevs_discovered": 3, 00:24:29.430 "num_base_bdevs_operational": 3, 00:24:29.430 "base_bdevs_list": [ 00:24:29.430 { 00:24:29.430 "name": "spare", 00:24:29.430 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:29.430 "is_configured": true, 00:24:29.430 "data_offset": 2048, 00:24:29.430 "data_size": 63488 00:24:29.430 }, 00:24:29.430 { 00:24:29.430 "name": null, 00:24:29.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.430 "is_configured": false, 00:24:29.430 "data_offset": 2048, 
00:24:29.430 "data_size": 63488 00:24:29.430 }, 00:24:29.430 { 00:24:29.430 "name": "BaseBdev3", 00:24:29.430 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:29.430 "is_configured": true, 00:24:29.430 "data_offset": 2048, 00:24:29.430 "data_size": 63488 00:24:29.430 }, 00:24:29.430 { 00:24:29.430 "name": "BaseBdev4", 00:24:29.430 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:29.430 "is_configured": true, 00:24:29.430 "data_offset": 2048, 00:24:29.430 "data_size": 63488 00:24:29.430 } 00:24:29.430 ] 00:24:29.430 }' 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:29.430 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.688 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:29.688 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:29.688 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:29.688 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:29.688 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:29.688 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.688 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.688 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.688 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:29.947 "name": "raid_bdev1", 00:24:29.947 "uuid": 
"5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:29.947 "strip_size_kb": 0, 00:24:29.947 "state": "online", 00:24:29.947 "raid_level": "raid1", 00:24:29.947 "superblock": true, 00:24:29.947 "num_base_bdevs": 4, 00:24:29.947 "num_base_bdevs_discovered": 3, 00:24:29.947 "num_base_bdevs_operational": 3, 00:24:29.947 "base_bdevs_list": [ 00:24:29.947 { 00:24:29.947 "name": "spare", 00:24:29.947 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:29.947 "is_configured": true, 00:24:29.947 "data_offset": 2048, 00:24:29.947 "data_size": 63488 00:24:29.947 }, 00:24:29.947 { 00:24:29.947 "name": null, 00:24:29.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.947 "is_configured": false, 00:24:29.947 "data_offset": 2048, 00:24:29.947 "data_size": 63488 00:24:29.947 }, 00:24:29.947 { 00:24:29.947 "name": "BaseBdev3", 00:24:29.947 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:29.947 "is_configured": true, 00:24:29.947 "data_offset": 2048, 00:24:29.947 "data_size": 63488 00:24:29.947 }, 00:24:29.947 { 00:24:29.947 "name": "BaseBdev4", 00:24:29.947 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:29.947 "is_configured": true, 00:24:29.947 "data_offset": 2048, 00:24:29.947 "data_size": 63488 00:24:29.947 } 00:24:29.947 ] 00:24:29.947 }' 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.947 [2024-11-08 17:12:06.516206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:29.947 "name": "raid_bdev1", 00:24:29.947 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:29.947 "strip_size_kb": 0, 00:24:29.947 "state": "online", 00:24:29.947 "raid_level": "raid1", 00:24:29.947 "superblock": true, 00:24:29.947 "num_base_bdevs": 4, 00:24:29.947 "num_base_bdevs_discovered": 2, 00:24:29.947 "num_base_bdevs_operational": 2, 00:24:29.947 "base_bdevs_list": [ 00:24:29.947 { 00:24:29.947 "name": null, 00:24:29.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.947 "is_configured": false, 00:24:29.947 "data_offset": 0, 00:24:29.947 "data_size": 63488 00:24:29.947 }, 00:24:29.947 { 00:24:29.947 "name": null, 00:24:29.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.947 "is_configured": false, 00:24:29.947 "data_offset": 2048, 00:24:29.947 "data_size": 63488 00:24:29.947 }, 00:24:29.947 { 00:24:29.947 "name": "BaseBdev3", 00:24:29.947 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:29.947 "is_configured": true, 00:24:29.947 "data_offset": 2048, 00:24:29.947 "data_size": 63488 00:24:29.947 }, 00:24:29.947 { 00:24:29.947 "name": "BaseBdev4", 00:24:29.947 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:29.947 "is_configured": true, 00:24:29.947 "data_offset": 2048, 00:24:29.947 "data_size": 63488 00:24:29.947 } 00:24:29.947 ] 00:24:29.947 }' 00:24:29.947 17:12:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:29.947 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.210 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:30.210 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.210 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.210 [2024-11-08 17:12:06.860312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:30.210 [2024-11-08 17:12:06.860537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:30.210 [2024-11-08 17:12:06.860559] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:30.210 [2024-11-08 17:12:06.860607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:30.210 [2024-11-08 17:12:06.870625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:24:30.210 17:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.210 17:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:30.210 [2024-11-08 17:12:06.872719] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:31.583 "name": "raid_bdev1", 00:24:31.583 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:31.583 "strip_size_kb": 0, 00:24:31.583 "state": "online", 00:24:31.583 "raid_level": "raid1", 00:24:31.583 "superblock": true, 00:24:31.583 "num_base_bdevs": 4, 00:24:31.583 "num_base_bdevs_discovered": 3, 00:24:31.583 "num_base_bdevs_operational": 3, 00:24:31.583 "process": { 00:24:31.583 "type": "rebuild", 00:24:31.583 "target": "spare", 00:24:31.583 "progress": { 00:24:31.583 "blocks": 20480, 00:24:31.583 "percent": 32 00:24:31.583 } 00:24:31.583 }, 00:24:31.583 "base_bdevs_list": [ 00:24:31.583 { 00:24:31.583 "name": "spare", 00:24:31.583 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:31.583 "is_configured": true, 00:24:31.583 "data_offset": 2048, 00:24:31.583 "data_size": 63488 00:24:31.583 }, 00:24:31.583 { 00:24:31.583 "name": null, 00:24:31.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.583 "is_configured": false, 00:24:31.583 "data_offset": 2048, 00:24:31.583 "data_size": 63488 00:24:31.583 }, 00:24:31.583 { 00:24:31.583 "name": "BaseBdev3", 00:24:31.583 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:31.583 "is_configured": true, 00:24:31.583 "data_offset": 2048, 00:24:31.583 "data_size": 
63488 00:24:31.583 }, 00:24:31.583 { 00:24:31.583 "name": "BaseBdev4", 00:24:31.583 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:31.583 "is_configured": true, 00:24:31.583 "data_offset": 2048, 00:24:31.583 "data_size": 63488 00:24:31.583 } 00:24:31.583 ] 00:24:31.583 }' 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.583 [2024-11-08 17:12:07.974767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:31.583 [2024-11-08 17:12:07.979781] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:31.583 [2024-11-08 17:12:07.979844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:31.583 [2024-11-08 17:12:07.979865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:31.583 [2024-11-08 17:12:07.979873] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.583 17:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.583 17:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.583 17:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.583 "name": "raid_bdev1", 00:24:31.583 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:31.584 "strip_size_kb": 0, 00:24:31.584 "state": "online", 00:24:31.584 "raid_level": "raid1", 00:24:31.584 "superblock": true, 00:24:31.584 "num_base_bdevs": 4, 00:24:31.584 "num_base_bdevs_discovered": 2, 00:24:31.584 "num_base_bdevs_operational": 2, 00:24:31.584 "base_bdevs_list": [ 00:24:31.584 { 00:24:31.584 "name": null, 
00:24:31.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.584 "is_configured": false, 00:24:31.584 "data_offset": 0, 00:24:31.584 "data_size": 63488 00:24:31.584 }, 00:24:31.584 { 00:24:31.584 "name": null, 00:24:31.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.584 "is_configured": false, 00:24:31.584 "data_offset": 2048, 00:24:31.584 "data_size": 63488 00:24:31.584 }, 00:24:31.584 { 00:24:31.584 "name": "BaseBdev3", 00:24:31.584 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:31.584 "is_configured": true, 00:24:31.584 "data_offset": 2048, 00:24:31.584 "data_size": 63488 00:24:31.584 }, 00:24:31.584 { 00:24:31.584 "name": "BaseBdev4", 00:24:31.584 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:31.584 "is_configured": true, 00:24:31.584 "data_offset": 2048, 00:24:31.584 "data_size": 63488 00:24:31.584 } 00:24:31.584 ] 00:24:31.584 }' 00:24:31.584 17:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.584 17:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.584 17:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:31.584 17:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.584 17:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.841 [2024-11-08 17:12:08.298883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:31.841 [2024-11-08 17:12:08.298963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.841 [2024-11-08 17:12:08.298993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:31.841 [2024-11-08 17:12:08.299003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.841 [2024-11-08 17:12:08.299512] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:24:31.841 [2024-11-08 17:12:08.299542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:31.841 [2024-11-08 17:12:08.299646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:31.841 [2024-11-08 17:12:08.299665] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:31.841 [2024-11-08 17:12:08.299680] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:31.841 [2024-11-08 17:12:08.299714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:31.841 [2024-11-08 17:12:08.309591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:24:31.841 spare 00:24:31.841 17:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.841 17:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:31.841 [2024-11-08 17:12:08.311660] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:32.775 "name": "raid_bdev1", 00:24:32.775 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:32.775 "strip_size_kb": 0, 00:24:32.775 "state": "online", 00:24:32.775 "raid_level": "raid1", 00:24:32.775 "superblock": true, 00:24:32.775 "num_base_bdevs": 4, 00:24:32.775 "num_base_bdevs_discovered": 3, 00:24:32.775 "num_base_bdevs_operational": 3, 00:24:32.775 "process": { 00:24:32.775 "type": "rebuild", 00:24:32.775 "target": "spare", 00:24:32.775 "progress": { 00:24:32.775 "blocks": 20480, 00:24:32.775 "percent": 32 00:24:32.775 } 00:24:32.775 }, 00:24:32.775 "base_bdevs_list": [ 00:24:32.775 { 00:24:32.775 "name": "spare", 00:24:32.775 "uuid": "25b9df00-8ef6-59d7-812d-8179cbe9221d", 00:24:32.775 "is_configured": true, 00:24:32.775 "data_offset": 2048, 00:24:32.775 "data_size": 63488 00:24:32.775 }, 00:24:32.775 { 00:24:32.775 "name": null, 00:24:32.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.775 "is_configured": false, 00:24:32.775 "data_offset": 2048, 00:24:32.775 "data_size": 63488 00:24:32.775 }, 00:24:32.775 { 00:24:32.775 "name": "BaseBdev3", 00:24:32.775 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:32.775 "is_configured": true, 00:24:32.775 "data_offset": 2048, 00:24:32.775 "data_size": 63488 00:24:32.775 }, 00:24:32.775 { 00:24:32.775 "name": "BaseBdev4", 00:24:32.775 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:32.775 "is_configured": true, 00:24:32.775 "data_offset": 2048, 00:24:32.775 "data_size": 63488 00:24:32.775 } 00:24:32.775 ] 00:24:32.775 }' 00:24:32.775 17:12:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.775 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.775 [2024-11-08 17:12:09.417191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:32.775 [2024-11-08 17:12:09.418786] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:32.775 [2024-11-08 17:12:09.418848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.775 [2024-11-08 17:12:09.418865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:32.776 [2024-11-08 17:12:09.418875] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.776 17:12:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.776 "name": "raid_bdev1", 00:24:32.776 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:32.776 "strip_size_kb": 0, 00:24:32.776 "state": "online", 00:24:32.776 "raid_level": "raid1", 00:24:32.776 "superblock": true, 00:24:32.776 "num_base_bdevs": 4, 00:24:32.776 "num_base_bdevs_discovered": 2, 00:24:32.776 "num_base_bdevs_operational": 2, 00:24:32.776 "base_bdevs_list": [ 00:24:32.776 { 00:24:32.776 "name": null, 00:24:32.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.776 "is_configured": false, 00:24:32.776 "data_offset": 0, 00:24:32.776 "data_size": 63488 00:24:32.776 }, 00:24:32.776 { 00:24:32.776 "name": null, 00:24:32.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.776 
"is_configured": false, 00:24:32.776 "data_offset": 2048, 00:24:32.776 "data_size": 63488 00:24:32.776 }, 00:24:32.776 { 00:24:32.776 "name": "BaseBdev3", 00:24:32.776 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:32.776 "is_configured": true, 00:24:32.776 "data_offset": 2048, 00:24:32.776 "data_size": 63488 00:24:32.776 }, 00:24:32.776 { 00:24:32.776 "name": "BaseBdev4", 00:24:32.776 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:32.776 "is_configured": true, 00:24:32.776 "data_offset": 2048, 00:24:32.776 "data_size": 63488 00:24:32.776 } 00:24:32.776 ] 00:24:32.776 }' 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.776 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.342 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:24:33.342 "name": "raid_bdev1", 00:24:33.342 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:33.342 "strip_size_kb": 0, 00:24:33.342 "state": "online", 00:24:33.342 "raid_level": "raid1", 00:24:33.342 "superblock": true, 00:24:33.342 "num_base_bdevs": 4, 00:24:33.342 "num_base_bdevs_discovered": 2, 00:24:33.342 "num_base_bdevs_operational": 2, 00:24:33.342 "base_bdevs_list": [ 00:24:33.342 { 00:24:33.342 "name": null, 00:24:33.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.342 "is_configured": false, 00:24:33.342 "data_offset": 0, 00:24:33.342 "data_size": 63488 00:24:33.342 }, 00:24:33.342 { 00:24:33.342 "name": null, 00:24:33.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.343 "is_configured": false, 00:24:33.343 "data_offset": 2048, 00:24:33.343 "data_size": 63488 00:24:33.343 }, 00:24:33.343 { 00:24:33.343 "name": "BaseBdev3", 00:24:33.343 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:33.343 "is_configured": true, 00:24:33.343 "data_offset": 2048, 00:24:33.343 "data_size": 63488 00:24:33.343 }, 00:24:33.343 { 00:24:33.343 "name": "BaseBdev4", 00:24:33.343 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:33.343 "is_configured": true, 00:24:33.343 "data_offset": 2048, 00:24:33.343 "data_size": 63488 00:24:33.343 } 00:24:33.343 ] 00:24:33.343 }' 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.343 [2024-11-08 17:12:09.870410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:33.343 [2024-11-08 17:12:09.870479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.343 [2024-11-08 17:12:09.870502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:24:33.343 [2024-11-08 17:12:09.870514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.343 [2024-11-08 17:12:09.870992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.343 [2024-11-08 17:12:09.871021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:33.343 [2024-11-08 17:12:09.871104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:33.343 [2024-11-08 17:12:09.871124] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:33.343 [2024-11-08 17:12:09.871134] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:33.343 [2024-11-08 17:12:09.871149] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:33.343 BaseBdev1 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:24:33.343 17:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.274 "name": "raid_bdev1", 00:24:34.274 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:34.274 "strip_size_kb": 0, 
00:24:34.274 "state": "online", 00:24:34.274 "raid_level": "raid1", 00:24:34.274 "superblock": true, 00:24:34.274 "num_base_bdevs": 4, 00:24:34.274 "num_base_bdevs_discovered": 2, 00:24:34.274 "num_base_bdevs_operational": 2, 00:24:34.274 "base_bdevs_list": [ 00:24:34.274 { 00:24:34.274 "name": null, 00:24:34.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.274 "is_configured": false, 00:24:34.274 "data_offset": 0, 00:24:34.274 "data_size": 63488 00:24:34.274 }, 00:24:34.274 { 00:24:34.274 "name": null, 00:24:34.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.274 "is_configured": false, 00:24:34.274 "data_offset": 2048, 00:24:34.274 "data_size": 63488 00:24:34.274 }, 00:24:34.274 { 00:24:34.274 "name": "BaseBdev3", 00:24:34.274 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:34.274 "is_configured": true, 00:24:34.274 "data_offset": 2048, 00:24:34.274 "data_size": 63488 00:24:34.274 }, 00:24:34.274 { 00:24:34.274 "name": "BaseBdev4", 00:24:34.274 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:34.274 "is_configured": true, 00:24:34.274 "data_offset": 2048, 00:24:34.274 "data_size": 63488 00:24:34.274 } 00:24:34.274 ] 00:24:34.274 }' 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.274 17:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.532 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:34.532 "name": "raid_bdev1", 00:24:34.533 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:34.533 "strip_size_kb": 0, 00:24:34.533 "state": "online", 00:24:34.533 "raid_level": "raid1", 00:24:34.533 "superblock": true, 00:24:34.533 "num_base_bdevs": 4, 00:24:34.533 "num_base_bdevs_discovered": 2, 00:24:34.533 "num_base_bdevs_operational": 2, 00:24:34.533 "base_bdevs_list": [ 00:24:34.533 { 00:24:34.533 "name": null, 00:24:34.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.533 "is_configured": false, 00:24:34.533 "data_offset": 0, 00:24:34.533 "data_size": 63488 00:24:34.533 }, 00:24:34.533 { 00:24:34.533 "name": null, 00:24:34.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.533 "is_configured": false, 00:24:34.533 "data_offset": 2048, 00:24:34.533 "data_size": 63488 00:24:34.533 }, 00:24:34.533 { 00:24:34.533 "name": "BaseBdev3", 00:24:34.533 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:34.533 "is_configured": true, 00:24:34.533 "data_offset": 2048, 00:24:34.533 "data_size": 63488 00:24:34.533 }, 00:24:34.533 { 00:24:34.533 "name": "BaseBdev4", 00:24:34.533 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:34.533 "is_configured": true, 00:24:34.533 "data_offset": 2048, 00:24:34.533 "data_size": 63488 00:24:34.533 } 00:24:34.533 ] 00:24:34.533 }' 00:24:34.533 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:24:34.791 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.792 [2024-11-08 17:12:11.294812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:34.792 [2024-11-08 17:12:11.295013] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:34.792 [2024-11-08 17:12:11.295030] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this 
bdev's uuid 00:24:34.792 request: 00:24:34.792 { 00:24:34.792 "base_bdev": "BaseBdev1", 00:24:34.792 "raid_bdev": "raid_bdev1", 00:24:34.792 "method": "bdev_raid_add_base_bdev", 00:24:34.792 "req_id": 1 00:24:34.792 } 00:24:34.792 Got JSON-RPC error response 00:24:34.792 response: 00:24:34.792 { 00:24:34.792 "code": -22, 00:24:34.792 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:34.792 } 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:34.792 17:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:35.724 "name": "raid_bdev1", 00:24:35.724 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:35.724 "strip_size_kb": 0, 00:24:35.724 "state": "online", 00:24:35.724 "raid_level": "raid1", 00:24:35.724 "superblock": true, 00:24:35.724 "num_base_bdevs": 4, 00:24:35.724 "num_base_bdevs_discovered": 2, 00:24:35.724 "num_base_bdevs_operational": 2, 00:24:35.724 "base_bdevs_list": [ 00:24:35.724 { 00:24:35.724 "name": null, 00:24:35.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.724 "is_configured": false, 00:24:35.724 "data_offset": 0, 00:24:35.724 "data_size": 63488 00:24:35.724 }, 00:24:35.724 { 00:24:35.724 "name": null, 00:24:35.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.724 "is_configured": false, 00:24:35.724 "data_offset": 2048, 00:24:35.724 "data_size": 63488 00:24:35.724 }, 00:24:35.724 { 00:24:35.724 "name": "BaseBdev3", 00:24:35.724 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:35.724 "is_configured": true, 00:24:35.724 "data_offset": 2048, 00:24:35.724 "data_size": 63488 00:24:35.724 }, 00:24:35.724 { 00:24:35.724 "name": "BaseBdev4", 00:24:35.724 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:35.724 "is_configured": true, 00:24:35.724 
"data_offset": 2048, 00:24:35.724 "data_size": 63488 00:24:35.724 } 00:24:35.724 ] 00:24:35.724 }' 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:35.724 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:35.983 "name": "raid_bdev1", 00:24:35.983 "uuid": "5c654eb7-eb15-4087-a42b-39b9fc1cd473", 00:24:35.983 "strip_size_kb": 0, 00:24:35.983 "state": "online", 00:24:35.983 "raid_level": "raid1", 00:24:35.983 "superblock": true, 00:24:35.983 "num_base_bdevs": 4, 00:24:35.983 "num_base_bdevs_discovered": 2, 00:24:35.983 "num_base_bdevs_operational": 2, 00:24:35.983 "base_bdevs_list": [ 00:24:35.983 { 00:24:35.983 "name": null, 00:24:35.983 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:35.983 "is_configured": false, 00:24:35.983 "data_offset": 0, 00:24:35.983 "data_size": 63488 00:24:35.983 }, 00:24:35.983 { 00:24:35.983 "name": null, 00:24:35.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.983 "is_configured": false, 00:24:35.983 "data_offset": 2048, 00:24:35.983 "data_size": 63488 00:24:35.983 }, 00:24:35.983 { 00:24:35.983 "name": "BaseBdev3", 00:24:35.983 "uuid": "81d7281a-fa14-51ea-beec-63cd074d4061", 00:24:35.983 "is_configured": true, 00:24:35.983 "data_offset": 2048, 00:24:35.983 "data_size": 63488 00:24:35.983 }, 00:24:35.983 { 00:24:35.983 "name": "BaseBdev4", 00:24:35.983 "uuid": "b2788df1-6a85-5d50-95ed-cb76200763e3", 00:24:35.983 "is_configured": true, 00:24:35.983 "data_offset": 2048, 00:24:35.983 "data_size": 63488 00:24:35.983 } 00:24:35.983 ] 00:24:35.983 }' 00:24:35.983 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76495 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 76495 ']' 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 76495 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76495 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 
-- # process_name=reactor_0 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:36.242 killing process with pid 76495 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76495' 00:24:36.242 Received shutdown signal, test time was about 60.000000 seconds 00:24:36.242 00:24:36.242 Latency(us) 00:24:36.242 [2024-11-08T17:12:12.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.242 [2024-11-08T17:12:12.957Z] =================================================================================================================== 00:24:36.242 [2024-11-08T17:12:12.957Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 76495 00:24:36.242 [2024-11-08 17:12:12.770478] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:36.242 17:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 76495 00:24:36.242 [2024-11-08 17:12:12.770606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:36.242 [2024-11-08 17:12:12.770680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:36.242 [2024-11-08 17:12:12.770691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:36.499 [2024-11-08 17:12:13.096321] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:24:37.431 00:24:37.431 real 0m26.106s 00:24:37.431 user 0m28.894s 00:24:37.431 sys 0m4.192s 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:24:37.431 ************************************ 00:24:37.431 END TEST raid_rebuild_test_sb 00:24:37.431 ************************************ 00:24:37.431 17:12:13 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:24:37.431 17:12:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:37.431 17:12:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:37.431 17:12:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:37.431 ************************************ 00:24:37.431 START TEST raid_rebuild_test_io 00:24:37.431 ************************************ 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:37.431 17:12:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77259 00:24:37.431 17:12:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77259 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 77259 ']' 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:37.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:37.431 17:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:37.431 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:37.431 Zero copy mechanism will not be used. 00:24:37.431 [2024-11-08 17:12:13.990336] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:24:37.431 [2024-11-08 17:12:13.990479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77259 ] 00:24:37.689 [2024-11-08 17:12:14.152936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.689 [2024-11-08 17:12:14.272546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.946 [2024-11-08 17:12:14.421975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:37.946 [2024-11-08 17:12:14.422026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:38.203 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.203 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:24:38.203 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:38.203 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:38.203 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.203 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.203 BaseBdev1_malloc 00:24:38.203 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.203 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:38.203 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.203 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.203 [2024-11-08 17:12:14.881363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:24:38.203 [2024-11-08 17:12:14.881435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.203 [2024-11-08 17:12:14.881460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:38.203 [2024-11-08 17:12:14.881477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.203 [2024-11-08 17:12:14.883792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.203 [2024-11-08 17:12:14.883830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:38.203 BaseBdev1 00:24:38.204 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.204 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:38.204 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:38.204 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.204 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.462 BaseBdev2_malloc 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.462 [2024-11-08 17:12:14.923905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:38.462 [2024-11-08 17:12:14.923969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.462 [2024-11-08 17:12:14.923990] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:38.462 [2024-11-08 17:12:14.924004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.462 [2024-11-08 17:12:14.926266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.462 [2024-11-08 17:12:14.926304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:38.462 BaseBdev2 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.462 BaseBdev3_malloc 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.462 [2024-11-08 17:12:14.980560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:38.462 [2024-11-08 17:12:14.980617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.462 [2024-11-08 17:12:14.980640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:38.462 [2024-11-08 17:12:14.980652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:24:38.462 [2024-11-08 17:12:14.982923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.462 [2024-11-08 17:12:14.983068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:38.462 BaseBdev3 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.462 17:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.462 BaseBdev4_malloc 00:24:38.462 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.462 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:38.462 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.462 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.462 [2024-11-08 17:12:15.024973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:38.462 [2024-11-08 17:12:15.025030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.462 [2024-11-08 17:12:15.025051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:38.462 [2024-11-08 17:12:15.025064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.462 [2024-11-08 17:12:15.027262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.462 [2024-11-08 17:12:15.027300] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:38.462 BaseBdev4 00:24:38.462 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.462 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:38.462 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.463 spare_malloc 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.463 spare_delay 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.463 [2024-11-08 17:12:15.079920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:38.463 [2024-11-08 17:12:15.079977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.463 [2024-11-08 17:12:15.079997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:38.463 [2024-11-08 17:12:15.080009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:24:38.463 [2024-11-08 17:12:15.082240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.463 [2024-11-08 17:12:15.082377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:38.463 spare 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.463 [2024-11-08 17:12:15.087973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:38.463 [2024-11-08 17:12:15.089994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:38.463 [2024-11-08 17:12:15.090125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:38.463 [2024-11-08 17:12:15.090235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:38.463 [2024-11-08 17:12:15.090345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:38.463 [2024-11-08 17:12:15.090530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:38.463 [2024-11-08 17:12:15.090889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:38.463 [2024-11-08 17:12:15.091117] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:38.463 [2024-11-08 17:12:15.091190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:38.463 [2024-11-08 17:12:15.091425] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.463 "name": "raid_bdev1", 00:24:38.463 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:38.463 
"strip_size_kb": 0, 00:24:38.463 "state": "online", 00:24:38.463 "raid_level": "raid1", 00:24:38.463 "superblock": false, 00:24:38.463 "num_base_bdevs": 4, 00:24:38.463 "num_base_bdevs_discovered": 4, 00:24:38.463 "num_base_bdevs_operational": 4, 00:24:38.463 "base_bdevs_list": [ 00:24:38.463 { 00:24:38.463 "name": "BaseBdev1", 00:24:38.463 "uuid": "fd4fb5b4-9fa7-5af0-9a79-fd495003ef00", 00:24:38.463 "is_configured": true, 00:24:38.463 "data_offset": 0, 00:24:38.463 "data_size": 65536 00:24:38.463 }, 00:24:38.463 { 00:24:38.463 "name": "BaseBdev2", 00:24:38.463 "uuid": "53cc2060-348e-55f0-8d0c-802e8a14cb78", 00:24:38.463 "is_configured": true, 00:24:38.463 "data_offset": 0, 00:24:38.463 "data_size": 65536 00:24:38.463 }, 00:24:38.463 { 00:24:38.463 "name": "BaseBdev3", 00:24:38.463 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:38.463 "is_configured": true, 00:24:38.463 "data_offset": 0, 00:24:38.463 "data_size": 65536 00:24:38.463 }, 00:24:38.463 { 00:24:38.463 "name": "BaseBdev4", 00:24:38.463 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:38.463 "is_configured": true, 00:24:38.463 "data_offset": 0, 00:24:38.463 "data_size": 65536 00:24:38.463 } 00:24:38.463 ] 00:24:38.463 }' 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.463 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.721 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:38.721 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:38.721 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.721 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.721 [2024-11-08 17:12:15.432434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:38.979 17:12:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.979 [2024-11-08 17:12:15.508103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.979 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.980 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.980 "name": "raid_bdev1", 00:24:38.980 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:38.980 "strip_size_kb": 0, 00:24:38.980 "state": "online", 00:24:38.980 "raid_level": "raid1", 00:24:38.980 "superblock": false, 00:24:38.980 "num_base_bdevs": 4, 00:24:38.980 "num_base_bdevs_discovered": 3, 00:24:38.980 "num_base_bdevs_operational": 3, 00:24:38.980 "base_bdevs_list": [ 00:24:38.980 { 00:24:38.980 "name": null, 00:24:38.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.980 "is_configured": false, 00:24:38.980 "data_offset": 0, 00:24:38.980 "data_size": 65536 00:24:38.980 
}, 00:24:38.980 { 00:24:38.980 "name": "BaseBdev2", 00:24:38.980 "uuid": "53cc2060-348e-55f0-8d0c-802e8a14cb78", 00:24:38.980 "is_configured": true, 00:24:38.980 "data_offset": 0, 00:24:38.980 "data_size": 65536 00:24:38.980 }, 00:24:38.980 { 00:24:38.980 "name": "BaseBdev3", 00:24:38.980 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:38.980 "is_configured": true, 00:24:38.980 "data_offset": 0, 00:24:38.980 "data_size": 65536 00:24:38.980 }, 00:24:38.980 { 00:24:38.980 "name": "BaseBdev4", 00:24:38.980 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:38.980 "is_configured": true, 00:24:38.980 "data_offset": 0, 00:24:38.980 "data_size": 65536 00:24:38.980 } 00:24:38.980 ] 00:24:38.980 }' 00:24:38.980 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.980 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.980 [2024-11-08 17:12:15.630139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:38.980 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:38.980 Zero copy mechanism will not be used. 00:24:38.980 Running I/O for 60 seconds... 
00:24:39.238 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:39.238 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.238 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:39.238 [2024-11-08 17:12:15.846951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:39.238 17:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.238 17:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:39.238 [2024-11-08 17:12:15.902780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:39.238 [2024-11-08 17:12:15.905061] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:39.495 [2024-11-08 17:12:16.045497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:39.495 [2024-11-08 17:12:16.156911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:39.495 [2024-11-08 17:12:16.157483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:40.060 [2024-11-08 17:12:16.503092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:40.060 138.00 IOPS, 414.00 MiB/s [2024-11-08T17:12:16.775Z] [2024-11-08 17:12:16.655280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:40.060 [2024-11-08 17:12:16.656035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:40.318 "name": "raid_bdev1", 00:24:40.318 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:40.318 "strip_size_kb": 0, 00:24:40.318 "state": "online", 00:24:40.318 "raid_level": "raid1", 00:24:40.318 "superblock": false, 00:24:40.318 "num_base_bdevs": 4, 00:24:40.318 "num_base_bdevs_discovered": 4, 00:24:40.318 "num_base_bdevs_operational": 4, 00:24:40.318 "process": { 00:24:40.318 "type": "rebuild", 00:24:40.318 "target": "spare", 00:24:40.318 "progress": { 00:24:40.318 "blocks": 12288, 00:24:40.318 "percent": 18 00:24:40.318 } 00:24:40.318 }, 00:24:40.318 "base_bdevs_list": [ 00:24:40.318 { 00:24:40.318 "name": "spare", 00:24:40.318 "uuid": "0d7b6501-21b6-5b5c-8422-2aedd570455b", 00:24:40.318 "is_configured": true, 00:24:40.318 "data_offset": 0, 00:24:40.318 "data_size": 65536 00:24:40.318 }, 00:24:40.318 { 
00:24:40.318 "name": "BaseBdev2", 00:24:40.318 "uuid": "53cc2060-348e-55f0-8d0c-802e8a14cb78", 00:24:40.318 "is_configured": true, 00:24:40.318 "data_offset": 0, 00:24:40.318 "data_size": 65536 00:24:40.318 }, 00:24:40.318 { 00:24:40.318 "name": "BaseBdev3", 00:24:40.318 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:40.318 "is_configured": true, 00:24:40.318 "data_offset": 0, 00:24:40.318 "data_size": 65536 00:24:40.318 }, 00:24:40.318 { 00:24:40.318 "name": "BaseBdev4", 00:24:40.318 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:40.318 "is_configured": true, 00:24:40.318 "data_offset": 0, 00:24:40.318 "data_size": 65536 00:24:40.318 } 00:24:40.318 ] 00:24:40.318 }' 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.318 17:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:40.318 [2024-11-08 17:12:16.991158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:40.318 [2024-11-08 17:12:16.991216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:40.578 [2024-11-08 17:12:17.093643] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:40.578 [2024-11-08 17:12:17.104654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:24:40.578 [2024-11-08 17:12:17.104717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:40.578 [2024-11-08 17:12:17.104732] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:40.578 [2024-11-08 17:12:17.138010] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.578 17:12:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.578 "name": "raid_bdev1", 00:24:40.578 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:40.578 "strip_size_kb": 0, 00:24:40.578 "state": "online", 00:24:40.578 "raid_level": "raid1", 00:24:40.578 "superblock": false, 00:24:40.578 "num_base_bdevs": 4, 00:24:40.578 "num_base_bdevs_discovered": 3, 00:24:40.578 "num_base_bdevs_operational": 3, 00:24:40.578 "base_bdevs_list": [ 00:24:40.578 { 00:24:40.578 "name": null, 00:24:40.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.578 "is_configured": false, 00:24:40.578 "data_offset": 0, 00:24:40.578 "data_size": 65536 00:24:40.578 }, 00:24:40.578 { 00:24:40.578 "name": "BaseBdev2", 00:24:40.578 "uuid": "53cc2060-348e-55f0-8d0c-802e8a14cb78", 00:24:40.578 "is_configured": true, 00:24:40.578 "data_offset": 0, 00:24:40.578 "data_size": 65536 00:24:40.578 }, 00:24:40.578 { 00:24:40.578 "name": "BaseBdev3", 00:24:40.578 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:40.578 "is_configured": true, 00:24:40.578 "data_offset": 0, 00:24:40.578 "data_size": 65536 00:24:40.578 }, 00:24:40.578 { 00:24:40.578 "name": "BaseBdev4", 00:24:40.578 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:40.578 "is_configured": true, 00:24:40.578 "data_offset": 0, 00:24:40.578 "data_size": 65536 00:24:40.578 } 00:24:40.578 ] 00:24:40.578 }' 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.578 17:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:40.837 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:40.837 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:24:40.837 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:40.837 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:40.837 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:40.837 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.837 17:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.837 17:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:40.837 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.837 17:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.095 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:41.095 "name": "raid_bdev1", 00:24:41.095 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:41.095 "strip_size_kb": 0, 00:24:41.095 "state": "online", 00:24:41.095 "raid_level": "raid1", 00:24:41.095 "superblock": false, 00:24:41.095 "num_base_bdevs": 4, 00:24:41.095 "num_base_bdevs_discovered": 3, 00:24:41.095 "num_base_bdevs_operational": 3, 00:24:41.095 "base_bdevs_list": [ 00:24:41.095 { 00:24:41.095 "name": null, 00:24:41.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.095 "is_configured": false, 00:24:41.095 "data_offset": 0, 00:24:41.095 "data_size": 65536 00:24:41.095 }, 00:24:41.095 { 00:24:41.095 "name": "BaseBdev2", 00:24:41.095 "uuid": "53cc2060-348e-55f0-8d0c-802e8a14cb78", 00:24:41.095 "is_configured": true, 00:24:41.095 "data_offset": 0, 00:24:41.095 "data_size": 65536 00:24:41.095 }, 00:24:41.095 { 00:24:41.095 "name": "BaseBdev3", 00:24:41.095 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:41.095 "is_configured": true, 00:24:41.095 
"data_offset": 0, 00:24:41.095 "data_size": 65536 00:24:41.095 }, 00:24:41.095 { 00:24:41.095 "name": "BaseBdev4", 00:24:41.095 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:41.095 "is_configured": true, 00:24:41.095 "data_offset": 0, 00:24:41.095 "data_size": 65536 00:24:41.095 } 00:24:41.095 ] 00:24:41.095 }' 00:24:41.095 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:41.095 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:41.095 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:41.095 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:41.095 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:41.095 17:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.095 17:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.095 [2024-11-08 17:12:17.639855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:41.095 130.00 IOPS, 390.00 MiB/s [2024-11-08T17:12:17.810Z] 17:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.095 17:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:41.095 [2024-11-08 17:12:17.689006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:41.095 [2024-11-08 17:12:17.691157] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:41.353 [2024-11-08 17:12:17.958791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:41.353 [2024-11-08 17:12:17.959546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:41.627 [2024-11-08 17:12:18.307513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:41.884 [2024-11-08 17:12:18.418650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:42.142 121.67 IOPS, 365.00 MiB/s [2024-11-08T17:12:18.857Z] 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:42.142 "name": "raid_bdev1", 00:24:42.142 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:42.142 "strip_size_kb": 0, 00:24:42.142 "state": "online", 00:24:42.142 "raid_level": "raid1", 00:24:42.142 "superblock": false, 00:24:42.142 "num_base_bdevs": 4, 00:24:42.142 "num_base_bdevs_discovered": 4, 00:24:42.142 "num_base_bdevs_operational": 
4, 00:24:42.142 "process": { 00:24:42.142 "type": "rebuild", 00:24:42.142 "target": "spare", 00:24:42.142 "progress": { 00:24:42.142 "blocks": 12288, 00:24:42.142 "percent": 18 00:24:42.142 } 00:24:42.142 }, 00:24:42.142 "base_bdevs_list": [ 00:24:42.142 { 00:24:42.142 "name": "spare", 00:24:42.142 "uuid": "0d7b6501-21b6-5b5c-8422-2aedd570455b", 00:24:42.142 "is_configured": true, 00:24:42.142 "data_offset": 0, 00:24:42.142 "data_size": 65536 00:24:42.142 }, 00:24:42.142 { 00:24:42.142 "name": "BaseBdev2", 00:24:42.142 "uuid": "53cc2060-348e-55f0-8d0c-802e8a14cb78", 00:24:42.142 "is_configured": true, 00:24:42.142 "data_offset": 0, 00:24:42.142 "data_size": 65536 00:24:42.142 }, 00:24:42.142 { 00:24:42.142 "name": "BaseBdev3", 00:24:42.142 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:42.142 "is_configured": true, 00:24:42.142 "data_offset": 0, 00:24:42.142 "data_size": 65536 00:24:42.142 }, 00:24:42.142 { 00:24:42.142 "name": "BaseBdev4", 00:24:42.142 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:42.142 "is_configured": true, 00:24:42.142 "data_offset": 0, 00:24:42.142 "data_size": 65536 00:24:42.142 } 00:24:42.142 ] 00:24:42.142 }' 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:42.142 [2024-11-08 17:12:18.772573] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local 
num_base_bdevs_operational=4 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.142 17:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:42.142 [2024-11-08 17:12:18.809443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:42.400 [2024-11-08 17:12:18.892009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:42.400 [2024-11-08 17:12:18.983852] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:24:42.400 [2024-11-08 17:12:18.984052] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.400 17:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:42.400 "name": "raid_bdev1", 00:24:42.400 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:42.400 "strip_size_kb": 0, 00:24:42.400 "state": "online", 00:24:42.400 "raid_level": "raid1", 00:24:42.400 "superblock": false, 00:24:42.400 "num_base_bdevs": 4, 00:24:42.400 "num_base_bdevs_discovered": 3, 00:24:42.400 "num_base_bdevs_operational": 3, 00:24:42.400 "process": { 00:24:42.400 "type": "rebuild", 00:24:42.400 "target": "spare", 00:24:42.400 "progress": { 00:24:42.400 "blocks": 16384, 00:24:42.400 "percent": 25 00:24:42.400 } 00:24:42.400 }, 00:24:42.400 "base_bdevs_list": [ 00:24:42.400 { 00:24:42.400 "name": "spare", 00:24:42.400 "uuid": "0d7b6501-21b6-5b5c-8422-2aedd570455b", 00:24:42.400 "is_configured": true, 00:24:42.400 "data_offset": 0, 00:24:42.400 "data_size": 65536 00:24:42.400 }, 00:24:42.400 { 00:24:42.400 "name": null, 00:24:42.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.400 "is_configured": false, 00:24:42.400 "data_offset": 0, 00:24:42.400 "data_size": 65536 00:24:42.400 }, 00:24:42.400 { 00:24:42.400 "name": "BaseBdev3", 00:24:42.400 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:42.400 "is_configured": true, 00:24:42.400 "data_offset": 0, 00:24:42.400 "data_size": 65536 00:24:42.400 }, 00:24:42.400 { 00:24:42.400 "name": "BaseBdev4", 00:24:42.400 "uuid": 
"d656374d-527e-5dfa-b873-67c2666da158", 00:24:42.400 "is_configured": true, 00:24:42.400 "data_offset": 0, 00:24:42.400 "data_size": 65536 00:24:42.400 } 00:24:42.400 ] 00:24:42.400 }' 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=427 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:42.400 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:42.657 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.657 17:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.657 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.657 17:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:42.657 17:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.657 17:12:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:42.657 "name": "raid_bdev1", 00:24:42.657 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:42.657 "strip_size_kb": 0, 00:24:42.657 "state": "online", 00:24:42.657 "raid_level": "raid1", 00:24:42.657 "superblock": false, 00:24:42.657 "num_base_bdevs": 4, 00:24:42.657 "num_base_bdevs_discovered": 3, 00:24:42.657 "num_base_bdevs_operational": 3, 00:24:42.657 "process": { 00:24:42.657 "type": "rebuild", 00:24:42.657 "target": "spare", 00:24:42.657 "progress": { 00:24:42.657 "blocks": 18432, 00:24:42.657 "percent": 28 00:24:42.657 } 00:24:42.657 }, 00:24:42.657 "base_bdevs_list": [ 00:24:42.657 { 00:24:42.657 "name": "spare", 00:24:42.657 "uuid": "0d7b6501-21b6-5b5c-8422-2aedd570455b", 00:24:42.657 "is_configured": true, 00:24:42.657 "data_offset": 0, 00:24:42.657 "data_size": 65536 00:24:42.657 }, 00:24:42.657 { 00:24:42.657 "name": null, 00:24:42.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.657 "is_configured": false, 00:24:42.657 "data_offset": 0, 00:24:42.657 "data_size": 65536 00:24:42.657 }, 00:24:42.657 { 00:24:42.657 "name": "BaseBdev3", 00:24:42.657 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:42.657 "is_configured": true, 00:24:42.657 "data_offset": 0, 00:24:42.657 "data_size": 65536 00:24:42.657 }, 00:24:42.657 { 00:24:42.657 "name": "BaseBdev4", 00:24:42.657 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:42.657 "is_configured": true, 00:24:42.657 "data_offset": 0, 00:24:42.657 "data_size": 65536 00:24:42.657 } 00:24:42.657 ] 00:24:42.657 }' 00:24:42.657 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:42.657 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.657 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:42.657 17:12:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.657 17:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:43.221 118.00 IOPS, 354.00 MiB/s [2024-11-08T17:12:19.936Z] [2024-11-08 17:12:19.683629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:43.477 [2024-11-08 17:12:19.998747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:43.735 "name": "raid_bdev1", 00:24:43.735 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:43.735 "strip_size_kb": 0, 00:24:43.735 "state": 
"online", 00:24:43.735 "raid_level": "raid1", 00:24:43.735 "superblock": false, 00:24:43.735 "num_base_bdevs": 4, 00:24:43.735 "num_base_bdevs_discovered": 3, 00:24:43.735 "num_base_bdevs_operational": 3, 00:24:43.735 "process": { 00:24:43.735 "type": "rebuild", 00:24:43.735 "target": "spare", 00:24:43.735 "progress": { 00:24:43.735 "blocks": 36864, 00:24:43.735 "percent": 56 00:24:43.735 } 00:24:43.735 }, 00:24:43.735 "base_bdevs_list": [ 00:24:43.735 { 00:24:43.735 "name": "spare", 00:24:43.735 "uuid": "0d7b6501-21b6-5b5c-8422-2aedd570455b", 00:24:43.735 "is_configured": true, 00:24:43.735 "data_offset": 0, 00:24:43.735 "data_size": 65536 00:24:43.735 }, 00:24:43.735 { 00:24:43.735 "name": null, 00:24:43.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.735 "is_configured": false, 00:24:43.735 "data_offset": 0, 00:24:43.735 "data_size": 65536 00:24:43.735 }, 00:24:43.735 { 00:24:43.735 "name": "BaseBdev3", 00:24:43.735 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:43.735 "is_configured": true, 00:24:43.735 "data_offset": 0, 00:24:43.735 "data_size": 65536 00:24:43.735 }, 00:24:43.735 { 00:24:43.735 "name": "BaseBdev4", 00:24:43.735 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:43.735 "is_configured": true, 00:24:43.735 "data_offset": 0, 00:24:43.735 "data_size": 65536 00:24:43.735 } 00:24:43.735 ] 00:24:43.735 }' 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.735 [2024-11-08 17:12:20.317976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:43.735 17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.736 
17:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:43.993 [2024-11-08 17:12:20.521637] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:44.250 107.80 IOPS, 323.40 MiB/s [2024-11-08T17:12:20.965Z] [2024-11-08 17:12:20.764161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:44.508 [2024-11-08 17:12:20.989323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:44.765 [2024-11-08 17:12:21.321599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:24:44.765 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:44.765 "name": "raid_bdev1", 00:24:44.765 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:44.765 "strip_size_kb": 0, 00:24:44.765 "state": "online", 00:24:44.765 "raid_level": "raid1", 00:24:44.765 "superblock": false, 00:24:44.765 "num_base_bdevs": 4, 00:24:44.765 "num_base_bdevs_discovered": 3, 00:24:44.765 "num_base_bdevs_operational": 3, 00:24:44.765 "process": { 00:24:44.765 "type": "rebuild", 00:24:44.766 "target": "spare", 00:24:44.766 "progress": { 00:24:44.766 "blocks": 51200, 00:24:44.766 "percent": 78 00:24:44.766 } 00:24:44.766 }, 00:24:44.766 "base_bdevs_list": [ 00:24:44.766 { 00:24:44.766 "name": "spare", 00:24:44.766 "uuid": "0d7b6501-21b6-5b5c-8422-2aedd570455b", 00:24:44.766 "is_configured": true, 00:24:44.766 "data_offset": 0, 00:24:44.766 "data_size": 65536 00:24:44.766 }, 00:24:44.766 { 00:24:44.766 "name": null, 00:24:44.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.766 "is_configured": false, 00:24:44.766 "data_offset": 0, 00:24:44.766 "data_size": 65536 00:24:44.766 }, 00:24:44.766 { 00:24:44.766 "name": "BaseBdev3", 00:24:44.766 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:44.766 "is_configured": true, 00:24:44.766 "data_offset": 0, 00:24:44.766 "data_size": 65536 00:24:44.766 }, 00:24:44.766 { 00:24:44.766 "name": "BaseBdev4", 00:24:44.766 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:44.766 "is_configured": true, 00:24:44.766 "data_offset": 0, 00:24:44.766 "data_size": 65536 00:24:44.766 } 00:24:44.766 ] 00:24:44.766 }' 00:24:44.766 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:44.766 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:44.766 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:45.024 17:12:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.024 17:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:45.595 94.33 IOPS, 283.00 MiB/s [2024-11-08T17:12:22.310Z] [2024-11-08 17:12:22.087898] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:45.595 [2024-11-08 17:12:22.187852] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:45.595 [2024-11-08 17:12:22.190333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:45.855 "name": "raid_bdev1", 00:24:45.855 "uuid": 
"effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:45.855 "strip_size_kb": 0, 00:24:45.855 "state": "online", 00:24:45.855 "raid_level": "raid1", 00:24:45.855 "superblock": false, 00:24:45.855 "num_base_bdevs": 4, 00:24:45.855 "num_base_bdevs_discovered": 3, 00:24:45.855 "num_base_bdevs_operational": 3, 00:24:45.855 "base_bdevs_list": [ 00:24:45.855 { 00:24:45.855 "name": "spare", 00:24:45.855 "uuid": "0d7b6501-21b6-5b5c-8422-2aedd570455b", 00:24:45.855 "is_configured": true, 00:24:45.855 "data_offset": 0, 00:24:45.855 "data_size": 65536 00:24:45.855 }, 00:24:45.855 { 00:24:45.855 "name": null, 00:24:45.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.855 "is_configured": false, 00:24:45.855 "data_offset": 0, 00:24:45.855 "data_size": 65536 00:24:45.855 }, 00:24:45.855 { 00:24:45.855 "name": "BaseBdev3", 00:24:45.855 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:45.855 "is_configured": true, 00:24:45.855 "data_offset": 0, 00:24:45.855 "data_size": 65536 00:24:45.855 }, 00:24:45.855 { 00:24:45.855 "name": "BaseBdev4", 00:24:45.855 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:45.855 "is_configured": true, 00:24:45.855 "data_offset": 0, 00:24:45.855 "data_size": 65536 00:24:45.855 } 00:24:45.855 ] 00:24:45.855 }' 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:45.855 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:46.114 "name": "raid_bdev1", 00:24:46.114 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:46.114 "strip_size_kb": 0, 00:24:46.114 "state": "online", 00:24:46.114 "raid_level": "raid1", 00:24:46.114 "superblock": false, 00:24:46.114 "num_base_bdevs": 4, 00:24:46.114 "num_base_bdevs_discovered": 3, 00:24:46.114 "num_base_bdevs_operational": 3, 00:24:46.114 "base_bdevs_list": [ 00:24:46.114 { 00:24:46.114 "name": "spare", 00:24:46.114 "uuid": "0d7b6501-21b6-5b5c-8422-2aedd570455b", 00:24:46.114 "is_configured": true, 00:24:46.114 "data_offset": 0, 00:24:46.114 "data_size": 65536 00:24:46.114 }, 00:24:46.114 { 00:24:46.114 "name": null, 00:24:46.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.114 "is_configured": false, 00:24:46.114 "data_offset": 0, 00:24:46.114 "data_size": 65536 00:24:46.114 }, 00:24:46.114 { 00:24:46.114 "name": "BaseBdev3", 00:24:46.114 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:46.114 "is_configured": true, 
00:24:46.114 "data_offset": 0, 00:24:46.114 "data_size": 65536 00:24:46.114 }, 00:24:46.114 { 00:24:46.114 "name": "BaseBdev4", 00:24:46.114 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:46.114 "is_configured": true, 00:24:46.114 "data_offset": 0, 00:24:46.114 "data_size": 65536 00:24:46.114 } 00:24:46.114 ] 00:24:46.114 }' 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:46.114 86.71 IOPS, 260.14 MiB/s [2024-11-08T17:12:22.829Z] 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:46.114 "name": "raid_bdev1", 00:24:46.114 "uuid": "effc99c4-17e3-4feb-8712-cf1a1430098b", 00:24:46.114 "strip_size_kb": 0, 00:24:46.114 "state": "online", 00:24:46.114 "raid_level": "raid1", 00:24:46.114 "superblock": false, 00:24:46.114 "num_base_bdevs": 4, 00:24:46.114 "num_base_bdevs_discovered": 3, 00:24:46.114 "num_base_bdevs_operational": 3, 00:24:46.114 "base_bdevs_list": [ 00:24:46.114 { 00:24:46.114 "name": "spare", 00:24:46.114 "uuid": "0d7b6501-21b6-5b5c-8422-2aedd570455b", 00:24:46.114 "is_configured": true, 00:24:46.114 "data_offset": 0, 00:24:46.114 "data_size": 65536 00:24:46.114 }, 00:24:46.114 { 00:24:46.114 "name": null, 00:24:46.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.114 "is_configured": false, 00:24:46.114 "data_offset": 0, 00:24:46.114 "data_size": 65536 00:24:46.114 }, 00:24:46.114 { 00:24:46.114 "name": "BaseBdev3", 00:24:46.114 "uuid": "32245c23-7665-5d44-859a-eb433a0d7797", 00:24:46.114 "is_configured": true, 00:24:46.114 "data_offset": 0, 00:24:46.114 "data_size": 65536 00:24:46.114 }, 00:24:46.114 { 00:24:46.114 "name": "BaseBdev4", 00:24:46.114 "uuid": "d656374d-527e-5dfa-b873-67c2666da158", 00:24:46.114 "is_configured": true, 00:24:46.114 "data_offset": 0, 00:24:46.114 "data_size": 65536 00:24:46.114 } 00:24:46.114 ] 00:24:46.114 }' 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:24:46.114 17:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:46.373 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:46.373 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.373 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:46.373 [2024-11-08 17:12:23.008128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:46.373 [2024-11-08 17:12:23.008290] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.631 00:24:46.631 Latency(us) 00:24:46.631 [2024-11-08T17:12:23.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.631 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:46.631 raid_bdev1 : 7.45 83.34 250.01 0.00 0.00 16424.94 316.65 120989.54 00:24:46.631 [2024-11-08T17:12:23.346Z] =================================================================================================================== 00:24:46.631 [2024-11-08T17:12:23.346Z] Total : 83.34 250.01 0.00 0.00 16424.94 316.65 120989.54 00:24:46.631 [2024-11-08 17:12:23.100153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.631 { 00:24:46.631 "results": [ 00:24:46.631 { 00:24:46.631 "job": "raid_bdev1", 00:24:46.631 "core_mask": "0x1", 00:24:46.631 "workload": "randrw", 00:24:46.631 "percentage": 50, 00:24:46.631 "status": "finished", 00:24:46.631 "queue_depth": 2, 00:24:46.631 "io_size": 3145728, 00:24:46.631 "runtime": 7.45166, 00:24:46.631 "iops": 83.33713561810389, 00:24:46.631 "mibps": 250.01140685431164, 00:24:46.631 "io_failed": 0, 00:24:46.631 "io_timeout": 0, 00:24:46.631 "avg_latency_us": 16424.937168338907, 00:24:46.631 "min_latency_us": 316.6523076923077, 00:24:46.631 "max_latency_us": 
120989.53846153847 00:24:46.631 } 00:24:46.631 ], 00:24:46.631 "core_count": 1 00:24:46.631 } 00:24:46.631 [2024-11-08 17:12:23.100370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.631 [2024-11-08 17:12:23.100496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.631 [2024-11-08 17:12:23.100627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:46.631 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.631 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.631 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:24:46.631 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.631 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:46.631 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.631 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:46.631 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:46.631 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:24:46.632 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:24:46.632 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:46.632 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:46.632 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:46.632 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 
00:24:46.632 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:46.632 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:46.632 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:46.632 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:46.632 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:24:46.894 /dev/nbd0 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:46.894 1+0 records in 00:24:46.894 1+0 records out 00:24:46.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344029 s, 11.9 MB/s 00:24:46.894 17:12:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:46.894 17:12:23 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:46.894 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:46.894 /dev/nbd1 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:47.152 1+0 records in 00:24:47.152 1+0 records out 00:24:47.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336626 s, 12.2 MB/s 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.152 
17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:47.152 17:12:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:47.411 17:12:24 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:47.411 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:47.670 /dev/nbd1 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:47.670 1+0 records in 00:24:47.670 1+0 records out 00:24:47.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255199 s, 16.1 MB/s 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:47.670 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 
00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:47.929 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 77259 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 77259 ']' 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 77259 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 77259 00:24:48.190 killing process with pid 77259 00:24:48.190 Received shutdown signal, test time was about 9.189943 seconds 00:24:48.190 00:24:48.190 Latency(us) 00:24:48.190 [2024-11-08T17:12:24.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.190 [2024-11-08T17:12:24.905Z] =================================================================================================================== 00:24:48.190 [2024-11-08T17:12:24.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77259' 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 77259 00:24:48.190 17:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 77259 00:24:48.190 [2024-11-08 17:12:24.822392] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:48.448 [2024-11-08 17:12:25.102279] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:49.385 ************************************ 00:24:49.385 END TEST raid_rebuild_test_io 00:24:49.385 ************************************ 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:24:49.385 00:24:49.385 real 0m12.004s 00:24:49.385 user 0m14.960s 00:24:49.385 sys 0m1.400s 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.385 17:12:25 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:24:49.385 
17:12:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:49.385 17:12:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:49.385 17:12:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:49.385 ************************************ 00:24:49.385 START TEST raid_rebuild_test_sb_io 00:24:49.385 ************************************ 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:49.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77657 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77657 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 77657 ']' 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.385 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:49.386 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.386 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:49.386 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.386 17:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:49.386 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:49.386 Zero copy mechanism will not be used. 00:24:49.386 [2024-11-08 17:12:26.066890] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:24:49.386 [2024-11-08 17:12:26.067022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77657 ] 00:24:49.644 [2024-11-08 17:12:26.221235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.644 [2024-11-08 17:12:26.337830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.901 [2024-11-08 17:12:26.485964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.901 [2024-11-08 17:12:26.486024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 BaseBdev1_malloc 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 [2024-11-08 17:12:26.946986] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:50.467 [2024-11-08 17:12:26.947057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.467 [2024-11-08 17:12:26.947083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:50.467 [2024-11-08 17:12:26.947096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.467 [2024-11-08 17:12:26.949360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.467 [2024-11-08 17:12:26.949398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:50.467 BaseBdev1 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 BaseBdev2_malloc 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 [2024-11-08 17:12:26.984810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:50.467 [2024-11-08 17:12:26.984865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:24:50.467 [2024-11-08 17:12:26.984885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:50.467 [2024-11-08 17:12:26.984899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.467 [2024-11-08 17:12:26.987118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.467 [2024-11-08 17:12:26.987153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:50.467 BaseBdev2 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 BaseBdev3_malloc 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 [2024-11-08 17:12:27.034394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:50.467 [2024-11-08 17:12:27.034453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.467 [2024-11-08 17:12:27.034478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:50.467 
[2024-11-08 17:12:27.034489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.467 [2024-11-08 17:12:27.036735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.467 [2024-11-08 17:12:27.036909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:50.467 BaseBdev3 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 BaseBdev4_malloc 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 [2024-11-08 17:12:27.076478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:50.467 [2024-11-08 17:12:27.076636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.467 [2024-11-08 17:12:27.076661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:50.467 [2024-11-08 17:12:27.076672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.467 [2024-11-08 17:12:27.079011] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.467 [2024-11-08 17:12:27.079119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:50.467 BaseBdev4 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 spare_malloc 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 spare_delay 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 [2024-11-08 17:12:27.126302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:50.467 [2024-11-08 17:12:27.126357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.467 [2024-11-08 17:12:27.126375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:24:50.467 [2024-11-08 17:12:27.126386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.467 [2024-11-08 17:12:27.128590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.467 [2024-11-08 17:12:27.128718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:50.467 spare 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 [2024-11-08 17:12:27.134346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:50.467 [2024-11-08 17:12:27.136279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:50.467 [2024-11-08 17:12:27.136347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:50.467 [2024-11-08 17:12:27.136400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:50.467 [2024-11-08 17:12:27.136586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:50.467 [2024-11-08 17:12:27.136603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:50.467 [2024-11-08 17:12:27.136886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:50.467 [2024-11-08 17:12:27.137062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:50.467 [2024-11-08 17:12:27.137077] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:50.467 [2024-11-08 17:12:27.137229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:50.467 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.468 "name": "raid_bdev1", 00:24:50.468 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:50.468 "strip_size_kb": 0, 00:24:50.468 "state": "online", 00:24:50.468 "raid_level": "raid1", 00:24:50.468 "superblock": true, 00:24:50.468 "num_base_bdevs": 4, 00:24:50.468 "num_base_bdevs_discovered": 4, 00:24:50.468 "num_base_bdevs_operational": 4, 00:24:50.468 "base_bdevs_list": [ 00:24:50.468 { 00:24:50.468 "name": "BaseBdev1", 00:24:50.468 "uuid": "6f3cfd45-ef2b-54fb-ac04-a4cd26d1cae2", 00:24:50.468 "is_configured": true, 00:24:50.468 "data_offset": 2048, 00:24:50.468 "data_size": 63488 00:24:50.468 }, 00:24:50.468 { 00:24:50.468 "name": "BaseBdev2", 00:24:50.468 "uuid": "bd906745-9b76-59cb-a830-96d8c1a68fd6", 00:24:50.468 "is_configured": true, 00:24:50.468 "data_offset": 2048, 00:24:50.468 "data_size": 63488 00:24:50.468 }, 00:24:50.468 { 00:24:50.468 "name": "BaseBdev3", 00:24:50.468 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:50.468 "is_configured": true, 00:24:50.468 "data_offset": 2048, 00:24:50.468 "data_size": 63488 00:24:50.468 }, 00:24:50.468 { 00:24:50.468 "name": "BaseBdev4", 00:24:50.468 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:50.468 "is_configured": true, 00:24:50.468 "data_offset": 2048, 00:24:50.468 "data_size": 63488 00:24:50.468 } 00:24:50.468 ] 00:24:50.468 }' 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.468 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.033 [2024-11-08 17:12:27.470856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.033 [2024-11-08 17:12:27.542446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.033 17:12:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:51.033 "name": "raid_bdev1", 00:24:51.033 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:51.033 "strip_size_kb": 0, 00:24:51.033 "state": "online", 00:24:51.033 "raid_level": "raid1", 00:24:51.033 
"superblock": true, 00:24:51.033 "num_base_bdevs": 4, 00:24:51.033 "num_base_bdevs_discovered": 3, 00:24:51.033 "num_base_bdevs_operational": 3, 00:24:51.033 "base_bdevs_list": [ 00:24:51.033 { 00:24:51.033 "name": null, 00:24:51.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.033 "is_configured": false, 00:24:51.033 "data_offset": 0, 00:24:51.033 "data_size": 63488 00:24:51.033 }, 00:24:51.033 { 00:24:51.033 "name": "BaseBdev2", 00:24:51.033 "uuid": "bd906745-9b76-59cb-a830-96d8c1a68fd6", 00:24:51.033 "is_configured": true, 00:24:51.033 "data_offset": 2048, 00:24:51.033 "data_size": 63488 00:24:51.033 }, 00:24:51.033 { 00:24:51.033 "name": "BaseBdev3", 00:24:51.033 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:51.033 "is_configured": true, 00:24:51.033 "data_offset": 2048, 00:24:51.033 "data_size": 63488 00:24:51.033 }, 00:24:51.033 { 00:24:51.033 "name": "BaseBdev4", 00:24:51.033 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:51.033 "is_configured": true, 00:24:51.033 "data_offset": 2048, 00:24:51.033 "data_size": 63488 00:24:51.033 } 00:24:51.033 ] 00:24:51.033 }' 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:51.033 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.033 [2024-11-08 17:12:27.628349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:51.033 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:51.033 Zero copy mechanism will not be used. 00:24:51.033 Running I/O for 60 seconds... 
00:24:51.291 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:51.291 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.291 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.291 [2024-11-08 17:12:27.884676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:51.291 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.291 17:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:51.291 [2024-11-08 17:12:27.948558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:51.291 [2024-11-08 17:12:27.950924] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:51.585 [2024-11-08 17:12:28.060811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:51.585 [2024-11-08 17:12:28.061373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:51.866 [2024-11-08 17:12:28.282309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:51.866 [2024-11-08 17:12:28.282635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:51.866 [2024-11-08 17:12:28.524601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:52.127 133.00 IOPS, 399.00 MiB/s [2024-11-08T17:12:28.842Z] [2024-11-08 17:12:28.665600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:52.127 [2024-11-08 17:12:28.666101] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:52.385 "name": "raid_bdev1", 00:24:52.385 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:52.385 "strip_size_kb": 0, 00:24:52.385 "state": "online", 00:24:52.385 "raid_level": "raid1", 00:24:52.385 "superblock": true, 00:24:52.385 "num_base_bdevs": 4, 00:24:52.385 "num_base_bdevs_discovered": 4, 00:24:52.385 "num_base_bdevs_operational": 4, 00:24:52.385 "process": { 00:24:52.385 "type": "rebuild", 00:24:52.385 "target": "spare", 00:24:52.385 "progress": { 00:24:52.385 "blocks": 12288, 00:24:52.385 "percent": 19 00:24:52.385 } 00:24:52.385 }, 00:24:52.385 "base_bdevs_list": [ 00:24:52.385 { 00:24:52.385 "name": "spare", 
00:24:52.385 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:24:52.385 "is_configured": true, 00:24:52.385 "data_offset": 2048, 00:24:52.385 "data_size": 63488 00:24:52.385 }, 00:24:52.385 { 00:24:52.385 "name": "BaseBdev2", 00:24:52.385 "uuid": "bd906745-9b76-59cb-a830-96d8c1a68fd6", 00:24:52.385 "is_configured": true, 00:24:52.385 "data_offset": 2048, 00:24:52.385 "data_size": 63488 00:24:52.385 }, 00:24:52.385 { 00:24:52.385 "name": "BaseBdev3", 00:24:52.385 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:52.385 "is_configured": true, 00:24:52.385 "data_offset": 2048, 00:24:52.385 "data_size": 63488 00:24:52.385 }, 00:24:52.385 { 00:24:52.385 "name": "BaseBdev4", 00:24:52.385 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:52.385 "is_configured": true, 00:24:52.385 "data_offset": 2048, 00:24:52.385 "data_size": 63488 00:24:52.385 } 00:24:52.385 ] 00:24:52.385 }' 00:24:52.385 17:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:52.385 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:52.385 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:52.385 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:52.385 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:52.385 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.385 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.385 [2024-11-08 17:12:29.034089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:52.385 [2024-11-08 17:12:29.035573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:52.643 [2024-11-08 
17:12:29.183258] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:52.643 [2024-11-08 17:12:29.195750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:52.643 [2024-11-08 17:12:29.195910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:52.643 [2024-11-08 17:12:29.195952] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:52.643 [2024-11-08 17:12:29.227977] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:52.643 "name": "raid_bdev1", 00:24:52.643 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:52.643 "strip_size_kb": 0, 00:24:52.643 "state": "online", 00:24:52.643 "raid_level": "raid1", 00:24:52.643 "superblock": true, 00:24:52.643 "num_base_bdevs": 4, 00:24:52.643 "num_base_bdevs_discovered": 3, 00:24:52.643 "num_base_bdevs_operational": 3, 00:24:52.643 "base_bdevs_list": [ 00:24:52.643 { 00:24:52.643 "name": null, 00:24:52.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.643 "is_configured": false, 00:24:52.643 "data_offset": 0, 00:24:52.643 "data_size": 63488 00:24:52.643 }, 00:24:52.643 { 00:24:52.643 "name": "BaseBdev2", 00:24:52.643 "uuid": "bd906745-9b76-59cb-a830-96d8c1a68fd6", 00:24:52.643 "is_configured": true, 00:24:52.643 "data_offset": 2048, 00:24:52.643 "data_size": 63488 00:24:52.643 }, 00:24:52.643 { 00:24:52.643 "name": "BaseBdev3", 00:24:52.643 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:52.643 "is_configured": true, 00:24:52.643 "data_offset": 2048, 00:24:52.643 "data_size": 63488 00:24:52.643 }, 00:24:52.643 { 00:24:52.643 "name": "BaseBdev4", 00:24:52.643 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:52.643 "is_configured": true, 00:24:52.643 "data_offset": 2048, 00:24:52.643 "data_size": 63488 00:24:52.643 } 00:24:52.643 ] 00:24:52.643 }' 00:24:52.643 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:52.643 17:12:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.901 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:52.901 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.901 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:52.901 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:52.901 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.901 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.901 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.901 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.901 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.901 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.192 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:53.192 "name": "raid_bdev1", 00:24:53.192 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:53.192 "strip_size_kb": 0, 00:24:53.192 "state": "online", 00:24:53.192 "raid_level": "raid1", 00:24:53.192 "superblock": true, 00:24:53.192 "num_base_bdevs": 4, 00:24:53.192 "num_base_bdevs_discovered": 3, 00:24:53.192 "num_base_bdevs_operational": 3, 00:24:53.192 "base_bdevs_list": [ 00:24:53.192 { 00:24:53.192 "name": null, 00:24:53.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.192 "is_configured": false, 00:24:53.192 "data_offset": 0, 00:24:53.192 "data_size": 63488 00:24:53.192 }, 00:24:53.192 { 00:24:53.192 "name": "BaseBdev2", 00:24:53.192 "uuid": 
"bd906745-9b76-59cb-a830-96d8c1a68fd6", 00:24:53.192 "is_configured": true, 00:24:53.192 "data_offset": 2048, 00:24:53.192 "data_size": 63488 00:24:53.192 }, 00:24:53.192 { 00:24:53.192 "name": "BaseBdev3", 00:24:53.192 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:53.192 "is_configured": true, 00:24:53.192 "data_offset": 2048, 00:24:53.192 "data_size": 63488 00:24:53.192 }, 00:24:53.192 { 00:24:53.192 "name": "BaseBdev4", 00:24:53.192 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:53.192 "is_configured": true, 00:24:53.192 "data_offset": 2048, 00:24:53.192 "data_size": 63488 00:24:53.192 } 00:24:53.192 ] 00:24:53.192 }' 00:24:53.192 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:53.192 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:53.192 131.00 IOPS, 393.00 MiB/s [2024-11-08T17:12:29.907Z] 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:53.192 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:53.192 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:53.192 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.192 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.192 [2024-11-08 17:12:29.692798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:53.192 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.192 17:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:53.192 [2024-11-08 17:12:29.786389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:53.192 [2024-11-08 17:12:29.788801] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:53.192 [2024-11-08 17:12:29.899225] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:53.192 [2024-11-08 17:12:29.899873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:53.453 [2024-11-08 17:12:30.113534] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:53.453 [2024-11-08 17:12:30.114104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:54.023 [2024-11-08 17:12:30.614319] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:54.284 119.33 IOPS, 358.00 MiB/s [2024-11-08T17:12:30.999Z] 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.284 
17:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:54.284 "name": "raid_bdev1", 00:24:54.284 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:54.284 "strip_size_kb": 0, 00:24:54.284 "state": "online", 00:24:54.284 "raid_level": "raid1", 00:24:54.284 "superblock": true, 00:24:54.284 "num_base_bdevs": 4, 00:24:54.284 "num_base_bdevs_discovered": 4, 00:24:54.284 "num_base_bdevs_operational": 4, 00:24:54.284 "process": { 00:24:54.284 "type": "rebuild", 00:24:54.284 "target": "spare", 00:24:54.284 "progress": { 00:24:54.284 "blocks": 10240, 00:24:54.284 "percent": 16 00:24:54.284 } 00:24:54.284 }, 00:24:54.284 "base_bdevs_list": [ 00:24:54.284 { 00:24:54.284 "name": "spare", 00:24:54.284 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:24:54.284 "is_configured": true, 00:24:54.284 "data_offset": 2048, 00:24:54.284 "data_size": 63488 00:24:54.284 }, 00:24:54.284 { 00:24:54.284 "name": "BaseBdev2", 00:24:54.284 "uuid": "bd906745-9b76-59cb-a830-96d8c1a68fd6", 00:24:54.284 "is_configured": true, 00:24:54.284 "data_offset": 2048, 00:24:54.284 "data_size": 63488 00:24:54.284 }, 00:24:54.284 { 00:24:54.284 "name": "BaseBdev3", 00:24:54.284 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:54.284 "is_configured": true, 00:24:54.284 "data_offset": 2048, 00:24:54.284 "data_size": 63488 00:24:54.284 }, 00:24:54.284 { 00:24:54.284 "name": "BaseBdev4", 00:24:54.284 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:54.284 "is_configured": true, 00:24:54.284 "data_offset": 2048, 00:24:54.284 "data_size": 63488 00:24:54.284 } 00:24:54.284 ] 00:24:54.284 }' 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:54.284 17:12:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:54.284 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.284 17:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.284 [2024-11-08 17:12:30.844148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:54.284 [2024-11-08 17:12:30.950441] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:54.544 [2024-11-08 17:12:31.157072] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:24:54.544 [2024-11-08 17:12:31.157308] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:24:54.544 [2024-11-08 17:12:31.160778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.544 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:54.544 "name": "raid_bdev1", 00:24:54.544 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:54.544 "strip_size_kb": 0, 00:24:54.544 "state": "online", 00:24:54.544 "raid_level": "raid1", 00:24:54.544 "superblock": true, 00:24:54.544 "num_base_bdevs": 4, 00:24:54.544 "num_base_bdevs_discovered": 3, 00:24:54.544 "num_base_bdevs_operational": 3, 00:24:54.544 "process": { 00:24:54.544 "type": "rebuild", 00:24:54.544 "target": "spare", 00:24:54.544 "progress": { 00:24:54.544 "blocks": 14336, 00:24:54.544 
"percent": 22 00:24:54.544 } 00:24:54.544 }, 00:24:54.544 "base_bdevs_list": [ 00:24:54.544 { 00:24:54.544 "name": "spare", 00:24:54.544 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:24:54.544 "is_configured": true, 00:24:54.544 "data_offset": 2048, 00:24:54.544 "data_size": 63488 00:24:54.544 }, 00:24:54.544 { 00:24:54.544 "name": null, 00:24:54.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.544 "is_configured": false, 00:24:54.544 "data_offset": 0, 00:24:54.544 "data_size": 63488 00:24:54.544 }, 00:24:54.544 { 00:24:54.544 "name": "BaseBdev3", 00:24:54.544 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:54.545 "is_configured": true, 00:24:54.545 "data_offset": 2048, 00:24:54.545 "data_size": 63488 00:24:54.545 }, 00:24:54.545 { 00:24:54.545 "name": "BaseBdev4", 00:24:54.545 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:54.545 "is_configured": true, 00:24:54.545 "data_offset": 2048, 00:24:54.545 "data_size": 63488 00:24:54.545 } 00:24:54.545 ] 00:24:54.545 }' 00:24:54.545 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:54.545 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:54.545 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=439 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.806 [2024-11-08 17:12:31.284312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:54.806 [2024-11-08 17:12:31.284687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:54.806 "name": "raid_bdev1", 00:24:54.806 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:54.806 "strip_size_kb": 0, 00:24:54.806 "state": "online", 00:24:54.806 "raid_level": "raid1", 00:24:54.806 "superblock": true, 00:24:54.806 "num_base_bdevs": 4, 00:24:54.806 "num_base_bdevs_discovered": 3, 00:24:54.806 "num_base_bdevs_operational": 3, 00:24:54.806 "process": { 00:24:54.806 "type": "rebuild", 00:24:54.806 "target": "spare", 00:24:54.806 "progress": { 00:24:54.806 "blocks": 14336, 00:24:54.806 "percent": 22 00:24:54.806 } 00:24:54.806 }, 00:24:54.806 "base_bdevs_list": [ 00:24:54.806 { 00:24:54.806 "name": "spare", 00:24:54.806 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:24:54.806 "is_configured": 
true, 00:24:54.806 "data_offset": 2048, 00:24:54.806 "data_size": 63488 00:24:54.806 }, 00:24:54.806 { 00:24:54.806 "name": null, 00:24:54.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.806 "is_configured": false, 00:24:54.806 "data_offset": 0, 00:24:54.806 "data_size": 63488 00:24:54.806 }, 00:24:54.806 { 00:24:54.806 "name": "BaseBdev3", 00:24:54.806 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:54.806 "is_configured": true, 00:24:54.806 "data_offset": 2048, 00:24:54.806 "data_size": 63488 00:24:54.806 }, 00:24:54.806 { 00:24:54.806 "name": "BaseBdev4", 00:24:54.806 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:54.806 "is_configured": true, 00:24:54.806 "data_offset": 2048, 00:24:54.806 "data_size": 63488 00:24:54.806 } 00:24:54.806 ] 00:24:54.806 }' 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:54.806 17:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:55.637 100.50 IOPS, 301.50 MiB/s [2024-11-08T17:12:32.352Z] [2024-11-08 17:12:32.055172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:55.637 [2024-11-08 17:12:32.055680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:55.899 
17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:55.899 "name": "raid_bdev1", 00:24:55.899 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:55.899 "strip_size_kb": 0, 00:24:55.899 "state": "online", 00:24:55.899 "raid_level": "raid1", 00:24:55.899 "superblock": true, 00:24:55.899 "num_base_bdevs": 4, 00:24:55.899 "num_base_bdevs_discovered": 3, 00:24:55.899 "num_base_bdevs_operational": 3, 00:24:55.899 "process": { 00:24:55.899 "type": "rebuild", 00:24:55.899 "target": "spare", 00:24:55.899 "progress": { 00:24:55.899 "blocks": 30720, 00:24:55.899 "percent": 48 00:24:55.899 } 00:24:55.899 }, 00:24:55.899 "base_bdevs_list": [ 00:24:55.899 { 00:24:55.899 "name": "spare", 00:24:55.899 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:24:55.899 "is_configured": true, 00:24:55.899 "data_offset": 2048, 00:24:55.899 "data_size": 63488 00:24:55.899 }, 00:24:55.899 { 00:24:55.899 "name": null, 00:24:55.899 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:55.899 "is_configured": false, 00:24:55.899 "data_offset": 0, 00:24:55.899 "data_size": 63488 00:24:55.899 }, 00:24:55.899 { 00:24:55.899 "name": "BaseBdev3", 00:24:55.899 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:55.899 "is_configured": true, 00:24:55.899 "data_offset": 2048, 00:24:55.899 "data_size": 63488 00:24:55.899 }, 00:24:55.899 { 00:24:55.899 "name": "BaseBdev4", 00:24:55.899 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:55.899 "is_configured": true, 00:24:55.899 "data_offset": 2048, 00:24:55.899 "data_size": 63488 00:24:55.899 } 00:24:55.899 ] 00:24:55.899 }' 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:55.899 [2024-11-08 17:12:32.428788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:55.899 17:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:56.160 91.20 IOPS, 273.60 MiB/s [2024-11-08T17:12:32.875Z] [2024-11-08 17:12:32.653773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:56.160 [2024-11-08 17:12:32.654358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:56.431 [2024-11-08 17:12:32.983858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:56.712 [2024-11-08 17:12:33.195152] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:56.970 "name": "raid_bdev1", 00:24:56.970 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:56.970 "strip_size_kb": 0, 00:24:56.970 "state": "online", 00:24:56.970 "raid_level": "raid1", 00:24:56.970 "superblock": true, 00:24:56.970 "num_base_bdevs": 4, 00:24:56.970 "num_base_bdevs_discovered": 3, 00:24:56.970 "num_base_bdevs_operational": 3, 00:24:56.970 "process": { 00:24:56.970 "type": "rebuild", 00:24:56.970 "target": "spare", 00:24:56.970 "progress": { 00:24:56.970 "blocks": 45056, 00:24:56.970 "percent": 70 00:24:56.970 } 00:24:56.970 }, 
00:24:56.970 "base_bdevs_list": [ 00:24:56.970 { 00:24:56.970 "name": "spare", 00:24:56.970 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:24:56.970 "is_configured": true, 00:24:56.970 "data_offset": 2048, 00:24:56.970 "data_size": 63488 00:24:56.970 }, 00:24:56.970 { 00:24:56.970 "name": null, 00:24:56.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.970 "is_configured": false, 00:24:56.970 "data_offset": 0, 00:24:56.970 "data_size": 63488 00:24:56.970 }, 00:24:56.970 { 00:24:56.970 "name": "BaseBdev3", 00:24:56.970 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:56.970 "is_configured": true, 00:24:56.970 "data_offset": 2048, 00:24:56.970 "data_size": 63488 00:24:56.970 }, 00:24:56.970 { 00:24:56.970 "name": "BaseBdev4", 00:24:56.970 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:56.970 "is_configured": true, 00:24:56.970 "data_offset": 2048, 00:24:56.970 "data_size": 63488 00:24:56.970 } 00:24:56.970 ] 00:24:56.970 }' 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:56.970 17:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:57.905 84.17 IOPS, 252.50 MiB/s [2024-11-08T17:12:34.620Z] [2024-11-08 17:12:34.417815] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:57.905 [2024-11-08 17:12:34.517845] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:57.905 [2024-11-08 17:12:34.520784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.905 17:12:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:57.905 "name": "raid_bdev1", 00:24:57.905 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:57.905 "strip_size_kb": 0, 00:24:57.905 "state": "online", 00:24:57.905 "raid_level": "raid1", 00:24:57.905 "superblock": true, 00:24:57.905 "num_base_bdevs": 4, 00:24:57.905 "num_base_bdevs_discovered": 3, 00:24:57.905 "num_base_bdevs_operational": 3, 00:24:57.905 "base_bdevs_list": [ 00:24:57.905 { 00:24:57.905 "name": "spare", 00:24:57.905 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:24:57.905 "is_configured": true, 00:24:57.905 "data_offset": 2048, 00:24:57.905 "data_size": 63488 00:24:57.905 }, 00:24:57.905 { 00:24:57.905 "name": null, 
00:24:57.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.905 "is_configured": false, 00:24:57.905 "data_offset": 0, 00:24:57.905 "data_size": 63488 00:24:57.905 }, 00:24:57.905 { 00:24:57.905 "name": "BaseBdev3", 00:24:57.905 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:57.905 "is_configured": true, 00:24:57.905 "data_offset": 2048, 00:24:57.905 "data_size": 63488 00:24:57.905 }, 00:24:57.905 { 00:24:57.905 "name": "BaseBdev4", 00:24:57.905 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:57.905 "is_configured": true, 00:24:57.905 "data_offset": 2048, 00:24:57.905 "data_size": 63488 00:24:57.905 } 00:24:57.905 ] 00:24:57.905 }' 00:24:57.905 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:58.165 78.14 IOPS, 234.43 MiB/s [2024-11-08T17:12:34.880Z] 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.165 17:12:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.165 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:58.165 "name": "raid_bdev1", 00:24:58.165 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:58.165 "strip_size_kb": 0, 00:24:58.165 "state": "online", 00:24:58.165 "raid_level": "raid1", 00:24:58.165 "superblock": true, 00:24:58.165 "num_base_bdevs": 4, 00:24:58.165 "num_base_bdevs_discovered": 3, 00:24:58.165 "num_base_bdevs_operational": 3, 00:24:58.165 "base_bdevs_list": [ 00:24:58.165 { 00:24:58.165 "name": "spare", 00:24:58.165 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:24:58.165 "is_configured": true, 00:24:58.165 "data_offset": 2048, 00:24:58.165 "data_size": 63488 00:24:58.165 }, 00:24:58.165 { 00:24:58.165 "name": null, 00:24:58.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.165 "is_configured": false, 00:24:58.165 "data_offset": 0, 00:24:58.165 "data_size": 63488 00:24:58.165 }, 00:24:58.165 { 00:24:58.165 "name": "BaseBdev3", 00:24:58.165 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:58.165 "is_configured": true, 00:24:58.165 "data_offset": 2048, 00:24:58.165 "data_size": 63488 00:24:58.166 }, 00:24:58.166 { 00:24:58.166 "name": "BaseBdev4", 00:24:58.166 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:58.166 "is_configured": true, 00:24:58.166 "data_offset": 2048, 00:24:58.166 "data_size": 63488 00:24:58.166 } 00:24:58.166 ] 00:24:58.166 }' 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:58.166 17:12:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:58.166 "name": "raid_bdev1", 00:24:58.166 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:24:58.166 "strip_size_kb": 0, 00:24:58.166 "state": "online", 00:24:58.166 "raid_level": "raid1", 00:24:58.166 "superblock": true, 00:24:58.166 "num_base_bdevs": 4, 00:24:58.166 "num_base_bdevs_discovered": 3, 00:24:58.166 "num_base_bdevs_operational": 3, 00:24:58.166 "base_bdevs_list": [ 00:24:58.166 { 00:24:58.166 "name": "spare", 00:24:58.166 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:24:58.166 "is_configured": true, 00:24:58.166 "data_offset": 2048, 00:24:58.166 "data_size": 63488 00:24:58.166 }, 00:24:58.166 { 00:24:58.166 "name": null, 00:24:58.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.166 "is_configured": false, 00:24:58.166 "data_offset": 0, 00:24:58.166 "data_size": 63488 00:24:58.166 }, 00:24:58.166 { 00:24:58.166 "name": "BaseBdev3", 00:24:58.166 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:24:58.166 "is_configured": true, 00:24:58.166 "data_offset": 2048, 00:24:58.166 "data_size": 63488 00:24:58.166 }, 00:24:58.166 { 00:24:58.166 "name": "BaseBdev4", 00:24:58.166 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:24:58.166 "is_configured": true, 00:24:58.166 "data_offset": 2048, 00:24:58.166 "data_size": 63488 00:24:58.166 } 00:24:58.166 ] 00:24:58.166 }' 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:58.166 17:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.467 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:58.467 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.467 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.467 [2024-11-08 
17:12:35.091858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:58.467 [2024-11-08 17:12:35.091901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:58.727 00:24:58.727 Latency(us) 00:24:58.727 [2024-11-08T17:12:35.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.727 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:58.727 raid_bdev1 : 7.55 75.86 227.59 0.00 0.00 17435.49 300.90 120182.94 00:24:58.727 [2024-11-08T17:12:35.442Z] =================================================================================================================== 00:24:58.727 [2024-11-08T17:12:35.442Z] Total : 75.86 227.59 0.00 0.00 17435.49 300.90 120182.94 00:24:58.727 { 00:24:58.727 "results": [ 00:24:58.727 { 00:24:58.727 "job": "raid_bdev1", 00:24:58.727 "core_mask": "0x1", 00:24:58.727 "workload": "randrw", 00:24:58.727 "percentage": 50, 00:24:58.727 "status": "finished", 00:24:58.727 "queue_depth": 2, 00:24:58.727 "io_size": 3145728, 00:24:58.727 "runtime": 7.553163, 00:24:58.727 "iops": 75.86225797060119, 00:24:58.727 "mibps": 227.58677391180356, 00:24:58.727 "io_failed": 0, 00:24:58.727 "io_timeout": 0, 00:24:58.727 "avg_latency_us": 17435.48537521815, 00:24:58.727 "min_latency_us": 300.89846153846156, 00:24:58.727 "max_latency_us": 120182.94153846154 00:24:58.727 } 00:24:58.727 ], 00:24:58.727 "core_count": 1 00:24:58.727 } 00:24:58.727 [2024-11-08 17:12:35.200086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.727 [2024-11-08 17:12:35.200162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:58.727 [2024-11-08 17:12:35.200282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:58.727 [2024-11-08 17:12:35.200297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name raid_bdev1, state offline 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:58.727 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:58.728 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:58.728 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:58.728 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:58.728 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:58.728 17:12:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:24:58.988 /dev/nbd0 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:58.988 1+0 records in 00:24:58.988 1+0 records out 00:24:58.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617836 s, 6.6 MB/s 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:58.988 
17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:58.988 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:59.247 /dev/nbd1 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:59.247 1+0 records in 00:24:59.247 1+0 records out 00:24:59.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035454 s, 11.6 MB/s 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:59.247 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:59.507 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:24:59.507 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:59.507 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:59.507 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:59.507 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:59.507 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:59.507 17:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:59.769 17:12:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:59.769 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:59.769 /dev/nbd1 00:25:00.030 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:00.030 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 
00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:00.031 1+0 records in 00:25:00.031 1+0 records out 00:25:00.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513556 s, 8.0 MB/s 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:00.031 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:00.296 17:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:00.595 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:00.595 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:00.596 [2024-11-08 17:12:37.060149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:00.596 [2024-11-08 17:12:37.060225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.596 [2024-11-08 17:12:37.060253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:25:00.596 [2024-11-08 17:12:37.060266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.596 [2024-11-08 17:12:37.063127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.596 [2024-11-08 17:12:37.063171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:00.596 [2024-11-08 17:12:37.063283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:00.596 [2024-11-08 17:12:37.063341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:00.596 [2024-11-08 17:12:37.063484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:00.596 [2024-11-08 17:12:37.063591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:00.596 spare 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:00.596 [2024-11-08 17:12:37.163693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:00.596 [2024-11-08 17:12:37.163784] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:00.596 [2024-11-08 17:12:37.164198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:25:00.596 [2024-11-08 17:12:37.164416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:00.596 [2024-11-08 17:12:37.164432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:00.596 [2024-11-08 17:12:37.164653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.596 17:12:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:00.596 "name": "raid_bdev1", 00:25:00.596 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:00.596 "strip_size_kb": 0, 00:25:00.596 "state": "online", 00:25:00.596 "raid_level": "raid1", 00:25:00.596 "superblock": true, 00:25:00.596 "num_base_bdevs": 4, 00:25:00.596 "num_base_bdevs_discovered": 3, 00:25:00.596 "num_base_bdevs_operational": 3, 00:25:00.596 "base_bdevs_list": [ 00:25:00.596 { 00:25:00.596 "name": "spare", 00:25:00.596 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:25:00.596 "is_configured": true, 00:25:00.596 "data_offset": 2048, 00:25:00.596 "data_size": 63488 00:25:00.596 }, 00:25:00.596 { 00:25:00.596 "name": null, 00:25:00.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.596 "is_configured": false, 00:25:00.596 "data_offset": 2048, 00:25:00.596 "data_size": 63488 00:25:00.596 }, 00:25:00.596 { 00:25:00.596 "name": "BaseBdev3", 00:25:00.596 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:00.596 "is_configured": true, 00:25:00.596 "data_offset": 2048, 00:25:00.596 "data_size": 63488 00:25:00.596 }, 00:25:00.596 { 00:25:00.596 "name": "BaseBdev4", 00:25:00.596 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:00.596 "is_configured": true, 00:25:00.596 "data_offset": 2048, 00:25:00.596 "data_size": 63488 00:25:00.596 } 00:25:00.596 ] 00:25:00.596 }' 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:00.596 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:25:00.857 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:00.857 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:00.857 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:00.857 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:00.857 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:00.858 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:00.858 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.858 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.858 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:00.858 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.858 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:00.858 "name": "raid_bdev1", 00:25:00.858 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:00.858 "strip_size_kb": 0, 00:25:00.858 "state": "online", 00:25:00.858 "raid_level": "raid1", 00:25:00.858 "superblock": true, 00:25:00.858 "num_base_bdevs": 4, 00:25:00.858 "num_base_bdevs_discovered": 3, 00:25:00.858 "num_base_bdevs_operational": 3, 00:25:00.858 "base_bdevs_list": [ 00:25:00.858 { 00:25:00.858 "name": "spare", 00:25:00.858 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:25:00.858 "is_configured": true, 00:25:00.858 "data_offset": 2048, 00:25:00.858 "data_size": 63488 00:25:00.858 }, 00:25:00.858 { 00:25:00.858 "name": null, 00:25:00.858 "uuid": "00000000-0000-0000-0000-000000000000", 
00:25:00.858 "is_configured": false, 00:25:00.858 "data_offset": 2048, 00:25:00.858 "data_size": 63488 00:25:00.858 }, 00:25:00.858 { 00:25:00.858 "name": "BaseBdev3", 00:25:00.858 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:00.858 "is_configured": true, 00:25:00.858 "data_offset": 2048, 00:25:00.858 "data_size": 63488 00:25:00.858 }, 00:25:00.858 { 00:25:00.858 "name": "BaseBdev4", 00:25:00.858 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:00.858 "is_configured": true, 00:25:00.858 "data_offset": 2048, 00:25:00.858 "data_size": 63488 00:25:00.858 } 00:25:00.858 ] 00:25:00.858 }' 00:25:00.858 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:00.858 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:00.858 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.119 17:12:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.119 [2024-11-08 17:12:37.612821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:01.119 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.120 
17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.120 "name": "raid_bdev1", 00:25:01.120 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:01.120 "strip_size_kb": 0, 00:25:01.120 "state": "online", 00:25:01.120 "raid_level": "raid1", 00:25:01.120 "superblock": true, 00:25:01.120 "num_base_bdevs": 4, 00:25:01.120 "num_base_bdevs_discovered": 2, 00:25:01.120 "num_base_bdevs_operational": 2, 00:25:01.120 "base_bdevs_list": [ 00:25:01.120 { 00:25:01.120 "name": null, 00:25:01.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.120 "is_configured": false, 00:25:01.120 "data_offset": 0, 00:25:01.120 "data_size": 63488 00:25:01.120 }, 00:25:01.120 { 00:25:01.120 "name": null, 00:25:01.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.120 "is_configured": false, 00:25:01.120 "data_offset": 2048, 00:25:01.120 "data_size": 63488 00:25:01.120 }, 00:25:01.120 { 00:25:01.120 "name": "BaseBdev3", 00:25:01.120 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:01.120 "is_configured": true, 00:25:01.120 "data_offset": 2048, 00:25:01.120 "data_size": 63488 00:25:01.120 }, 00:25:01.120 { 00:25:01.120 "name": "BaseBdev4", 00:25:01.120 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:01.120 "is_configured": true, 00:25:01.120 "data_offset": 2048, 00:25:01.120 "data_size": 63488 00:25:01.120 } 00:25:01.120 ] 00:25:01.120 }' 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.120 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.382 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:01.382 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.382 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.382 [2024-11-08 17:12:37.937002] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:01.382 [2024-11-08 17:12:37.937459] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:01.382 [2024-11-08 17:12:37.937586] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:01.382 [2024-11-08 17:12:37.937661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:01.382 [2024-11-08 17:12:37.948878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:25:01.382 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.382 17:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:01.382 [2024-11-08 17:12:37.951441] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:02.339 "name": "raid_bdev1", 00:25:02.339 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:02.339 "strip_size_kb": 0, 00:25:02.339 "state": "online", 00:25:02.339 "raid_level": "raid1", 00:25:02.339 "superblock": true, 00:25:02.339 "num_base_bdevs": 4, 00:25:02.339 "num_base_bdevs_discovered": 3, 00:25:02.339 "num_base_bdevs_operational": 3, 00:25:02.339 "process": { 00:25:02.339 "type": "rebuild", 00:25:02.339 "target": "spare", 00:25:02.339 "progress": { 00:25:02.339 "blocks": 20480, 00:25:02.339 "percent": 32 00:25:02.339 } 00:25:02.339 }, 00:25:02.339 "base_bdevs_list": [ 00:25:02.339 { 00:25:02.339 "name": "spare", 00:25:02.339 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:25:02.339 "is_configured": true, 00:25:02.339 "data_offset": 2048, 00:25:02.339 "data_size": 63488 00:25:02.339 }, 00:25:02.339 { 00:25:02.339 "name": null, 00:25:02.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.339 "is_configured": false, 00:25:02.339 "data_offset": 2048, 00:25:02.339 "data_size": 63488 00:25:02.339 }, 00:25:02.339 { 00:25:02.339 "name": "BaseBdev3", 00:25:02.339 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:02.339 "is_configured": true, 00:25:02.339 "data_offset": 2048, 00:25:02.339 "data_size": 63488 00:25:02.339 }, 00:25:02.339 { 00:25:02.339 "name": "BaseBdev4", 00:25:02.339 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:02.339 "is_configured": true, 00:25:02.339 "data_offset": 2048, 00:25:02.339 "data_size": 63488 00:25:02.339 } 00:25:02.339 ] 00:25:02.339 }' 00:25:02.339 17:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:02.339 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:25:02.340 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:02.340 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:02.340 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:02.340 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.340 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.601 [2024-11-08 17:12:39.058089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:02.601 [2024-11-08 17:12:39.062871] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:02.601 [2024-11-08 17:12:39.062961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:02.601 [2024-11-08 17:12:39.062982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:02.601 [2024-11-08 17:12:39.063000] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.601 "name": "raid_bdev1", 00:25:02.601 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:02.601 "strip_size_kb": 0, 00:25:02.601 "state": "online", 00:25:02.601 "raid_level": "raid1", 00:25:02.601 "superblock": true, 00:25:02.601 "num_base_bdevs": 4, 00:25:02.601 "num_base_bdevs_discovered": 2, 00:25:02.601 "num_base_bdevs_operational": 2, 00:25:02.601 "base_bdevs_list": [ 00:25:02.601 { 00:25:02.601 "name": null, 00:25:02.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.601 "is_configured": false, 00:25:02.601 "data_offset": 0, 00:25:02.601 "data_size": 63488 00:25:02.601 }, 00:25:02.601 { 00:25:02.601 "name": null, 00:25:02.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.601 "is_configured": false, 00:25:02.601 "data_offset": 2048, 00:25:02.601 "data_size": 63488 00:25:02.601 }, 00:25:02.601 { 00:25:02.601 "name": 
"BaseBdev3", 00:25:02.601 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:02.601 "is_configured": true, 00:25:02.601 "data_offset": 2048, 00:25:02.601 "data_size": 63488 00:25:02.601 }, 00:25:02.601 { 00:25:02.601 "name": "BaseBdev4", 00:25:02.601 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:02.601 "is_configured": true, 00:25:02.601 "data_offset": 2048, 00:25:02.601 "data_size": 63488 00:25:02.601 } 00:25:02.601 ] 00:25:02.601 }' 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:02.601 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.862 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:02.862 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.862 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.862 [2024-11-08 17:12:39.423021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:02.862 [2024-11-08 17:12:39.423128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.862 [2024-11-08 17:12:39.423168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:25:02.862 [2024-11-08 17:12:39.423182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.862 [2024-11-08 17:12:39.423892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.862 [2024-11-08 17:12:39.423925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:02.862 [2024-11-08 17:12:39.424062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:02.862 [2024-11-08 17:12:39.424081] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller 
than existing raid bdev raid_bdev1 (6) 00:25:02.862 [2024-11-08 17:12:39.424095] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:02.862 [2024-11-08 17:12:39.424129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:02.862 [2024-11-08 17:12:39.435475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:25:02.862 spare 00:25:02.862 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.862 17:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:02.862 [2024-11-08 17:12:39.437939] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:03.801 "name": "raid_bdev1", 00:25:03.801 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:03.801 "strip_size_kb": 0, 00:25:03.801 "state": "online", 00:25:03.801 "raid_level": "raid1", 00:25:03.801 "superblock": true, 00:25:03.801 "num_base_bdevs": 4, 00:25:03.801 "num_base_bdevs_discovered": 3, 00:25:03.801 "num_base_bdevs_operational": 3, 00:25:03.801 "process": { 00:25:03.801 "type": "rebuild", 00:25:03.801 "target": "spare", 00:25:03.801 "progress": { 00:25:03.801 "blocks": 20480, 00:25:03.801 "percent": 32 00:25:03.801 } 00:25:03.801 }, 00:25:03.801 "base_bdevs_list": [ 00:25:03.801 { 00:25:03.801 "name": "spare", 00:25:03.801 "uuid": "fc11bddf-0f0e-5a58-a2db-81d9731c7a22", 00:25:03.801 "is_configured": true, 00:25:03.801 "data_offset": 2048, 00:25:03.801 "data_size": 63488 00:25:03.801 }, 00:25:03.801 { 00:25:03.801 "name": null, 00:25:03.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.801 "is_configured": false, 00:25:03.801 "data_offset": 2048, 00:25:03.801 "data_size": 63488 00:25:03.801 }, 00:25:03.801 { 00:25:03.801 "name": "BaseBdev3", 00:25:03.801 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:03.801 "is_configured": true, 00:25:03.801 "data_offset": 2048, 00:25:03.801 "data_size": 63488 00:25:03.801 }, 00:25:03.801 { 00:25:03.801 "name": "BaseBdev4", 00:25:03.801 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:03.801 "is_configured": true, 00:25:03.801 "data_offset": 2048, 00:25:03.801 "data_size": 63488 00:25:03.801 } 00:25:03.801 ] 00:25:03.801 }' 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:03.801 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.062 [2024-11-08 17:12:40.543791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:04.062 [2024-11-08 17:12:40.549516] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:04.062 [2024-11-08 17:12:40.549745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:04.062 [2024-11-08 17:12:40.549805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:04.062 [2024-11-08 17:12:40.549815] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.062 "name": "raid_bdev1", 00:25:04.062 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:04.062 "strip_size_kb": 0, 00:25:04.062 "state": "online", 00:25:04.062 "raid_level": "raid1", 00:25:04.062 "superblock": true, 00:25:04.062 "num_base_bdevs": 4, 00:25:04.062 "num_base_bdevs_discovered": 2, 00:25:04.062 "num_base_bdevs_operational": 2, 00:25:04.062 "base_bdevs_list": [ 00:25:04.062 { 00:25:04.062 "name": null, 00:25:04.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.062 "is_configured": false, 00:25:04.062 "data_offset": 0, 00:25:04.062 "data_size": 63488 00:25:04.062 }, 00:25:04.062 { 00:25:04.062 "name": null, 00:25:04.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.062 "is_configured": false, 00:25:04.062 "data_offset": 2048, 00:25:04.062 "data_size": 63488 00:25:04.062 }, 00:25:04.062 { 00:25:04.062 "name": "BaseBdev3", 00:25:04.062 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:04.062 "is_configured": true, 00:25:04.062 "data_offset": 2048, 00:25:04.062 "data_size": 63488 00:25:04.062 }, 
00:25:04.062 { 00:25:04.062 "name": "BaseBdev4", 00:25:04.062 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:04.062 "is_configured": true, 00:25:04.062 "data_offset": 2048, 00:25:04.062 "data_size": 63488 00:25:04.062 } 00:25:04.062 ] 00:25:04.062 }' 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.062 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:04.323 "name": "raid_bdev1", 00:25:04.323 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:04.323 "strip_size_kb": 0, 00:25:04.323 "state": "online", 00:25:04.323 "raid_level": "raid1", 00:25:04.323 "superblock": true, 00:25:04.323 "num_base_bdevs": 4, 00:25:04.323 
"num_base_bdevs_discovered": 2, 00:25:04.323 "num_base_bdevs_operational": 2, 00:25:04.323 "base_bdevs_list": [ 00:25:04.323 { 00:25:04.323 "name": null, 00:25:04.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.323 "is_configured": false, 00:25:04.323 "data_offset": 0, 00:25:04.323 "data_size": 63488 00:25:04.323 }, 00:25:04.323 { 00:25:04.323 "name": null, 00:25:04.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.323 "is_configured": false, 00:25:04.323 "data_offset": 2048, 00:25:04.323 "data_size": 63488 00:25:04.323 }, 00:25:04.323 { 00:25:04.323 "name": "BaseBdev3", 00:25:04.323 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:04.323 "is_configured": true, 00:25:04.323 "data_offset": 2048, 00:25:04.323 "data_size": 63488 00:25:04.323 }, 00:25:04.323 { 00:25:04.323 "name": "BaseBdev4", 00:25:04.323 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:04.323 "is_configured": true, 00:25:04.323 "data_offset": 2048, 00:25:04.323 "data_size": 63488 00:25:04.323 } 00:25:04.323 ] 00:25:04.323 }' 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:04.323 [2024-11-08 17:12:40.981920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:04.323 [2024-11-08 17:12:40.982019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.323 [2024-11-08 17:12:40.982053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:25:04.323 [2024-11-08 17:12:40.982065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.323 [2024-11-08 17:12:40.982696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.323 [2024-11-08 17:12:40.982733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:04.323 [2024-11-08 17:12:40.982880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:04.323 [2024-11-08 17:12:40.982900] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:04.323 [2024-11-08 17:12:40.982915] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:04.323 [2024-11-08 17:12:40.982930] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:04.323 BaseBdev1 00:25:04.323 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.324 17:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:05.705 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:25:05.705 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:05.705 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:05.705 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:05.705 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:05.705 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:05.705 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:05.705 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:05.705 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:05.705 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:05.706 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.706 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.706 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.706 17:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:05.706 "name": "raid_bdev1", 00:25:05.706 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:05.706 "strip_size_kb": 0, 00:25:05.706 "state": "online", 00:25:05.706 "raid_level": "raid1", 00:25:05.706 "superblock": true, 00:25:05.706 "num_base_bdevs": 4, 00:25:05.706 "num_base_bdevs_discovered": 2, 00:25:05.706 
"num_base_bdevs_operational": 2, 00:25:05.706 "base_bdevs_list": [ 00:25:05.706 { 00:25:05.706 "name": null, 00:25:05.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.706 "is_configured": false, 00:25:05.706 "data_offset": 0, 00:25:05.706 "data_size": 63488 00:25:05.706 }, 00:25:05.706 { 00:25:05.706 "name": null, 00:25:05.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.706 "is_configured": false, 00:25:05.706 "data_offset": 2048, 00:25:05.706 "data_size": 63488 00:25:05.706 }, 00:25:05.706 { 00:25:05.706 "name": "BaseBdev3", 00:25:05.706 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:05.706 "is_configured": true, 00:25:05.706 "data_offset": 2048, 00:25:05.706 "data_size": 63488 00:25:05.706 }, 00:25:05.706 { 00:25:05.706 "name": "BaseBdev4", 00:25:05.706 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:05.706 "is_configured": true, 00:25:05.706 "data_offset": 2048, 00:25:05.706 "data_size": 63488 00:25:05.706 } 00:25:05.706 ] 00:25:05.706 }' 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:05.706 "name": "raid_bdev1", 00:25:05.706 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:05.706 "strip_size_kb": 0, 00:25:05.706 "state": "online", 00:25:05.706 "raid_level": "raid1", 00:25:05.706 "superblock": true, 00:25:05.706 "num_base_bdevs": 4, 00:25:05.706 "num_base_bdevs_discovered": 2, 00:25:05.706 "num_base_bdevs_operational": 2, 00:25:05.706 "base_bdevs_list": [ 00:25:05.706 { 00:25:05.706 "name": null, 00:25:05.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.706 "is_configured": false, 00:25:05.706 "data_offset": 0, 00:25:05.706 "data_size": 63488 00:25:05.706 }, 00:25:05.706 { 00:25:05.706 "name": null, 00:25:05.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.706 "is_configured": false, 00:25:05.706 "data_offset": 2048, 00:25:05.706 "data_size": 63488 00:25:05.706 }, 00:25:05.706 { 00:25:05.706 "name": "BaseBdev3", 00:25:05.706 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:05.706 "is_configured": true, 00:25:05.706 "data_offset": 2048, 00:25:05.706 "data_size": 63488 00:25:05.706 }, 00:25:05.706 { 00:25:05.706 "name": "BaseBdev4", 00:25:05.706 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:05.706 "is_configured": true, 00:25:05.706 "data_offset": 2048, 00:25:05.706 "data_size": 63488 00:25:05.706 } 00:25:05.706 ] 00:25:05.706 }' 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:05.706 17:12:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.706 [2024-11-08 17:12:42.406626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:05.706 [2024-11-08 17:12:42.406918] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:05.706 [2024-11-08 17:12:42.406942] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:05.706 request: 00:25:05.706 { 00:25:05.706 "base_bdev": "BaseBdev1", 00:25:05.706 "raid_bdev": 
"raid_bdev1", 00:25:05.706 "method": "bdev_raid_add_base_bdev", 00:25:05.706 "req_id": 1 00:25:05.706 } 00:25:05.706 Got JSON-RPC error response 00:25:05.706 response: 00:25:05.706 { 00:25:05.706 "code": -22, 00:25:05.706 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:05.706 } 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:05.706 17:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.093 17:12:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.093 "name": "raid_bdev1", 00:25:07.093 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:07.093 "strip_size_kb": 0, 00:25:07.093 "state": "online", 00:25:07.093 "raid_level": "raid1", 00:25:07.093 "superblock": true, 00:25:07.093 "num_base_bdevs": 4, 00:25:07.093 "num_base_bdevs_discovered": 2, 00:25:07.093 "num_base_bdevs_operational": 2, 00:25:07.093 "base_bdevs_list": [ 00:25:07.093 { 00:25:07.093 "name": null, 00:25:07.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.093 "is_configured": false, 00:25:07.093 "data_offset": 0, 00:25:07.093 "data_size": 63488 00:25:07.093 }, 00:25:07.093 { 00:25:07.093 "name": null, 00:25:07.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.093 "is_configured": false, 00:25:07.093 "data_offset": 2048, 00:25:07.093 "data_size": 63488 00:25:07.093 }, 00:25:07.093 { 00:25:07.093 "name": "BaseBdev3", 00:25:07.093 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:07.093 "is_configured": true, 00:25:07.093 "data_offset": 2048, 00:25:07.093 "data_size": 63488 00:25:07.093 }, 00:25:07.093 { 00:25:07.093 "name": "BaseBdev4", 00:25:07.093 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:07.093 "is_configured": true, 00:25:07.093 "data_offset": 2048, 00:25:07.093 
"data_size": 63488 00:25:07.093 } 00:25:07.093 ] 00:25:07.093 }' 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.093 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:07.093 "name": "raid_bdev1", 00:25:07.093 "uuid": "a30e7e5e-063a-4cde-ade6-66966c48c1a4", 00:25:07.093 "strip_size_kb": 0, 00:25:07.093 "state": "online", 00:25:07.093 "raid_level": "raid1", 00:25:07.093 "superblock": true, 00:25:07.093 "num_base_bdevs": 4, 00:25:07.093 "num_base_bdevs_discovered": 2, 00:25:07.093 "num_base_bdevs_operational": 2, 00:25:07.093 "base_bdevs_list": [ 00:25:07.093 { 00:25:07.093 "name": null, 00:25:07.093 "uuid": "00000000-0000-0000-0000-000000000000", 
00:25:07.093 "is_configured": false, 00:25:07.093 "data_offset": 0, 00:25:07.093 "data_size": 63488 00:25:07.093 }, 00:25:07.093 { 00:25:07.093 "name": null, 00:25:07.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.093 "is_configured": false, 00:25:07.093 "data_offset": 2048, 00:25:07.093 "data_size": 63488 00:25:07.093 }, 00:25:07.093 { 00:25:07.093 "name": "BaseBdev3", 00:25:07.093 "uuid": "5a058338-0a61-50d0-b501-7b3e5a7cddbf", 00:25:07.093 "is_configured": true, 00:25:07.094 "data_offset": 2048, 00:25:07.094 "data_size": 63488 00:25:07.094 }, 00:25:07.094 { 00:25:07.094 "name": "BaseBdev4", 00:25:07.094 "uuid": "b2fdd59c-8d84-5087-8cf7-7cc830b9e70b", 00:25:07.094 "is_configured": true, 00:25:07.094 "data_offset": 2048, 00:25:07.094 "data_size": 63488 00:25:07.094 } 00:25:07.094 ] 00:25:07.094 }' 00:25:07.094 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:07.094 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:07.094 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77657 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77657 ']' 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77657 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77657 00:25:07.354 killing process with pid 77657 
00:25:07.354 Received shutdown signal, test time was about 16.219954 seconds 00:25:07.354 00:25:07.354 Latency(us) 00:25:07.354 [2024-11-08T17:12:44.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.354 [2024-11-08T17:12:44.069Z] =================================================================================================================== 00:25:07.354 [2024-11-08T17:12:44.069Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77657' 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77657 00:25:07.354 17:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77657 00:25:07.354 [2024-11-08 17:12:43.850713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:07.354 [2024-11-08 17:12:43.851006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:07.354 [2024-11-08 17:12:43.851152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:07.354 [2024-11-08 17:12:43.851170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:07.614 [2024-11-08 17:12:44.171841] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:08.554 17:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:25:08.554 00:25:08.554 real 0m19.113s 00:25:08.554 user 0m24.073s 00:25:08.554 sys 0m1.991s 00:25:08.554 17:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:08.554 17:12:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:08.554 ************************************ 00:25:08.554 END TEST raid_rebuild_test_sb_io 00:25:08.554 ************************************ 00:25:08.555 17:12:45 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:25:08.555 17:12:45 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:25:08.555 17:12:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:08.555 17:12:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:08.555 17:12:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:08.555 ************************************ 00:25:08.555 START TEST raid5f_state_function_test 00:25:08.555 ************************************ 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:08.555 17:12:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78362 00:25:08.555 
Process raid pid: 78362 00:25:08.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78362' 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78362 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 78362 ']' 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:08.555 17:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.817 [2024-11-08 17:12:45.274172] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:25:08.817 [2024-11-08 17:12:45.274357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.817 [2024-11-08 17:12:45.442871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.078 [2024-11-08 17:12:45.607716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.338 [2024-11-08 17:12:45.794032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:09.338 [2024-11-08 17:12:45.794089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:09.596 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.597 [2024-11-08 17:12:46.247557] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:09.597 [2024-11-08 17:12:46.247621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:09.597 [2024-11-08 17:12:46.247639] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:09.597 [2024-11-08 17:12:46.247650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:09.597 [2024-11-08 17:12:46.247656] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:25:09.597 [2024-11-08 17:12:46.247665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:09.597 "name": "Existed_Raid", 00:25:09.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.597 "strip_size_kb": 64, 00:25:09.597 "state": "configuring", 00:25:09.597 "raid_level": "raid5f", 00:25:09.597 "superblock": false, 00:25:09.597 "num_base_bdevs": 3, 00:25:09.597 "num_base_bdevs_discovered": 0, 00:25:09.597 "num_base_bdevs_operational": 3, 00:25:09.597 "base_bdevs_list": [ 00:25:09.597 { 00:25:09.597 "name": "BaseBdev1", 00:25:09.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.597 "is_configured": false, 00:25:09.597 "data_offset": 0, 00:25:09.597 "data_size": 0 00:25:09.597 }, 00:25:09.597 { 00:25:09.597 "name": "BaseBdev2", 00:25:09.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.597 "is_configured": false, 00:25:09.597 "data_offset": 0, 00:25:09.597 "data_size": 0 00:25:09.597 }, 00:25:09.597 { 00:25:09.597 "name": "BaseBdev3", 00:25:09.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.597 "is_configured": false, 00:25:09.597 "data_offset": 0, 00:25:09.597 "data_size": 0 00:25:09.597 } 00:25:09.597 ] 00:25:09.597 }' 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.597 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.162 [2024-11-08 17:12:46.587581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:10.162 [2024-11-08 17:12:46.587620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.162 [2024-11-08 17:12:46.595572] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:10.162 [2024-11-08 17:12:46.595615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:10.162 [2024-11-08 17:12:46.595625] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:10.162 [2024-11-08 17:12:46.595635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:10.162 [2024-11-08 17:12:46.595642] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:10.162 [2024-11-08 17:12:46.595653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.162 [2024-11-08 17:12:46.630567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:10.162 BaseBdev1 00:25:10.162 17:12:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.162 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.163 [ 00:25:10.163 { 00:25:10.163 "name": "BaseBdev1", 00:25:10.163 "aliases": [ 00:25:10.163 "352818eb-10c7-494b-88f2-f5b247ef7489" 00:25:10.163 ], 00:25:10.163 "product_name": "Malloc disk", 00:25:10.163 "block_size": 512, 00:25:10.163 "num_blocks": 65536, 00:25:10.163 "uuid": "352818eb-10c7-494b-88f2-f5b247ef7489", 00:25:10.163 "assigned_rate_limits": { 00:25:10.163 "rw_ios_per_sec": 0, 00:25:10.163 
"rw_mbytes_per_sec": 0, 00:25:10.163 "r_mbytes_per_sec": 0, 00:25:10.163 "w_mbytes_per_sec": 0 00:25:10.163 }, 00:25:10.163 "claimed": true, 00:25:10.163 "claim_type": "exclusive_write", 00:25:10.163 "zoned": false, 00:25:10.163 "supported_io_types": { 00:25:10.163 "read": true, 00:25:10.163 "write": true, 00:25:10.163 "unmap": true, 00:25:10.163 "flush": true, 00:25:10.163 "reset": true, 00:25:10.163 "nvme_admin": false, 00:25:10.163 "nvme_io": false, 00:25:10.163 "nvme_io_md": false, 00:25:10.163 "write_zeroes": true, 00:25:10.163 "zcopy": true, 00:25:10.163 "get_zone_info": false, 00:25:10.163 "zone_management": false, 00:25:10.163 "zone_append": false, 00:25:10.163 "compare": false, 00:25:10.163 "compare_and_write": false, 00:25:10.163 "abort": true, 00:25:10.163 "seek_hole": false, 00:25:10.163 "seek_data": false, 00:25:10.163 "copy": true, 00:25:10.163 "nvme_iov_md": false 00:25:10.163 }, 00:25:10.163 "memory_domains": [ 00:25:10.163 { 00:25:10.163 "dma_device_id": "system", 00:25:10.163 "dma_device_type": 1 00:25:10.163 }, 00:25:10.163 { 00:25:10.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.163 "dma_device_type": 2 00:25:10.163 } 00:25:10.163 ], 00:25:10.163 "driver_specific": {} 00:25:10.163 } 00:25:10.163 ] 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:10.163 17:12:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.163 "name": "Existed_Raid", 00:25:10.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.163 "strip_size_kb": 64, 00:25:10.163 "state": "configuring", 00:25:10.163 "raid_level": "raid5f", 00:25:10.163 "superblock": false, 00:25:10.163 "num_base_bdevs": 3, 00:25:10.163 "num_base_bdevs_discovered": 1, 00:25:10.163 "num_base_bdevs_operational": 3, 00:25:10.163 "base_bdevs_list": [ 00:25:10.163 { 00:25:10.163 "name": "BaseBdev1", 00:25:10.163 "uuid": "352818eb-10c7-494b-88f2-f5b247ef7489", 00:25:10.163 "is_configured": true, 00:25:10.163 "data_offset": 0, 00:25:10.163 "data_size": 65536 00:25:10.163 }, 00:25:10.163 { 00:25:10.163 "name": 
"BaseBdev2", 00:25:10.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.163 "is_configured": false, 00:25:10.163 "data_offset": 0, 00:25:10.163 "data_size": 0 00:25:10.163 }, 00:25:10.163 { 00:25:10.163 "name": "BaseBdev3", 00:25:10.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.163 "is_configured": false, 00:25:10.163 "data_offset": 0, 00:25:10.163 "data_size": 0 00:25:10.163 } 00:25:10.163 ] 00:25:10.163 }' 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.163 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.459 [2024-11-08 17:12:46.974698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:10.459 [2024-11-08 17:12:46.974779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.459 [2024-11-08 17:12:46.986780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:10.459 [2024-11-08 17:12:46.988860] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:25:10.459 [2024-11-08 17:12:46.988983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:10.459 [2024-11-08 17:12:46.989040] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:10.459 [2024-11-08 17:12:46.989067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.459 17:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.459 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.459 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.459 "name": "Existed_Raid", 00:25:10.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.459 "strip_size_kb": 64, 00:25:10.459 "state": "configuring", 00:25:10.459 "raid_level": "raid5f", 00:25:10.459 "superblock": false, 00:25:10.459 "num_base_bdevs": 3, 00:25:10.459 "num_base_bdevs_discovered": 1, 00:25:10.459 "num_base_bdevs_operational": 3, 00:25:10.459 "base_bdevs_list": [ 00:25:10.459 { 00:25:10.459 "name": "BaseBdev1", 00:25:10.459 "uuid": "352818eb-10c7-494b-88f2-f5b247ef7489", 00:25:10.459 "is_configured": true, 00:25:10.459 "data_offset": 0, 00:25:10.459 "data_size": 65536 00:25:10.459 }, 00:25:10.459 { 00:25:10.459 "name": "BaseBdev2", 00:25:10.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.459 "is_configured": false, 00:25:10.459 "data_offset": 0, 00:25:10.459 "data_size": 0 00:25:10.459 }, 00:25:10.459 { 00:25:10.459 "name": "BaseBdev3", 00:25:10.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.459 "is_configured": false, 00:25:10.459 "data_offset": 0, 00:25:10.459 "data_size": 0 00:25:10.459 } 00:25:10.459 ] 00:25:10.459 }' 00:25:10.459 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.459 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.717 [2024-11-08 17:12:47.331404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:10.717 BaseBdev2 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.717 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.718 [ 00:25:10.718 { 00:25:10.718 "name": "BaseBdev2", 00:25:10.718 "aliases": [ 00:25:10.718 "5fdc0494-a047-4444-94e2-85a8282c5165" 00:25:10.718 ], 00:25:10.718 "product_name": "Malloc disk", 00:25:10.718 "block_size": 512, 00:25:10.718 "num_blocks": 65536, 00:25:10.718 "uuid": "5fdc0494-a047-4444-94e2-85a8282c5165", 00:25:10.718 "assigned_rate_limits": { 00:25:10.718 "rw_ios_per_sec": 0, 00:25:10.718 "rw_mbytes_per_sec": 0, 00:25:10.718 "r_mbytes_per_sec": 0, 00:25:10.718 "w_mbytes_per_sec": 0 00:25:10.718 }, 00:25:10.718 "claimed": true, 00:25:10.718 "claim_type": "exclusive_write", 00:25:10.718 "zoned": false, 00:25:10.718 "supported_io_types": { 00:25:10.718 "read": true, 00:25:10.718 "write": true, 00:25:10.718 "unmap": true, 00:25:10.718 "flush": true, 00:25:10.718 "reset": true, 00:25:10.718 "nvme_admin": false, 00:25:10.718 "nvme_io": false, 00:25:10.718 "nvme_io_md": false, 00:25:10.718 "write_zeroes": true, 00:25:10.718 "zcopy": true, 00:25:10.718 "get_zone_info": false, 00:25:10.718 "zone_management": false, 00:25:10.718 "zone_append": false, 00:25:10.718 "compare": false, 00:25:10.718 "compare_and_write": false, 00:25:10.718 "abort": true, 00:25:10.718 "seek_hole": false, 00:25:10.718 "seek_data": false, 00:25:10.718 "copy": true, 00:25:10.718 "nvme_iov_md": false 00:25:10.718 }, 00:25:10.718 "memory_domains": [ 00:25:10.718 { 00:25:10.718 "dma_device_id": "system", 00:25:10.718 "dma_device_type": 1 00:25:10.718 }, 00:25:10.718 { 00:25:10.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.718 "dma_device_type": 2 00:25:10.718 } 00:25:10.718 ], 00:25:10.718 "driver_specific": {} 00:25:10.718 } 00:25:10.718 ] 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:25:10.718 "name": "Existed_Raid", 00:25:10.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.718 "strip_size_kb": 64, 00:25:10.718 "state": "configuring", 00:25:10.718 "raid_level": "raid5f", 00:25:10.718 "superblock": false, 00:25:10.718 "num_base_bdevs": 3, 00:25:10.718 "num_base_bdevs_discovered": 2, 00:25:10.718 "num_base_bdevs_operational": 3, 00:25:10.718 "base_bdevs_list": [ 00:25:10.718 { 00:25:10.718 "name": "BaseBdev1", 00:25:10.718 "uuid": "352818eb-10c7-494b-88f2-f5b247ef7489", 00:25:10.718 "is_configured": true, 00:25:10.718 "data_offset": 0, 00:25:10.718 "data_size": 65536 00:25:10.718 }, 00:25:10.718 { 00:25:10.718 "name": "BaseBdev2", 00:25:10.718 "uuid": "5fdc0494-a047-4444-94e2-85a8282c5165", 00:25:10.718 "is_configured": true, 00:25:10.718 "data_offset": 0, 00:25:10.718 "data_size": 65536 00:25:10.718 }, 00:25:10.718 { 00:25:10.718 "name": "BaseBdev3", 00:25:10.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.718 "is_configured": false, 00:25:10.718 "data_offset": 0, 00:25:10.718 "data_size": 0 00:25:10.718 } 00:25:10.718 ] 00:25:10.718 }' 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.718 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.975 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:10.975 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.975 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.235 [2024-11-08 17:12:47.725848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:11.236 [2024-11-08 17:12:47.726050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:11.236 [2024-11-08 17:12:47.726089] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:11.236 [2024-11-08 17:12:47.726730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:11.236 [2024-11-08 17:12:47.730698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:11.236 [2024-11-08 17:12:47.730804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:11.236 [2024-11-08 17:12:47.731099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:11.236 BaseBdev3 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.236 [ 00:25:11.236 { 00:25:11.236 "name": "BaseBdev3", 00:25:11.236 "aliases": [ 00:25:11.236 "e98eb3e9-8cf2-4025-9af0-55abeab6bea1" 00:25:11.236 ], 00:25:11.236 "product_name": "Malloc disk", 00:25:11.236 "block_size": 512, 00:25:11.236 "num_blocks": 65536, 00:25:11.236 "uuid": "e98eb3e9-8cf2-4025-9af0-55abeab6bea1", 00:25:11.236 "assigned_rate_limits": { 00:25:11.236 "rw_ios_per_sec": 0, 00:25:11.236 "rw_mbytes_per_sec": 0, 00:25:11.236 "r_mbytes_per_sec": 0, 00:25:11.236 "w_mbytes_per_sec": 0 00:25:11.236 }, 00:25:11.236 "claimed": true, 00:25:11.236 "claim_type": "exclusive_write", 00:25:11.236 "zoned": false, 00:25:11.236 "supported_io_types": { 00:25:11.236 "read": true, 00:25:11.236 "write": true, 00:25:11.236 "unmap": true, 00:25:11.236 "flush": true, 00:25:11.236 "reset": true, 00:25:11.236 "nvme_admin": false, 00:25:11.236 "nvme_io": false, 00:25:11.236 "nvme_io_md": false, 00:25:11.236 "write_zeroes": true, 00:25:11.236 "zcopy": true, 00:25:11.236 "get_zone_info": false, 00:25:11.236 "zone_management": false, 00:25:11.236 "zone_append": false, 00:25:11.236 "compare": false, 00:25:11.236 "compare_and_write": false, 00:25:11.236 "abort": true, 00:25:11.236 "seek_hole": false, 00:25:11.236 "seek_data": false, 00:25:11.236 "copy": true, 00:25:11.236 "nvme_iov_md": false 00:25:11.236 }, 00:25:11.236 "memory_domains": [ 00:25:11.236 { 00:25:11.236 "dma_device_id": "system", 00:25:11.236 "dma_device_type": 1 00:25:11.236 }, 00:25:11.236 { 00:25:11.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.236 "dma_device_type": 2 00:25:11.236 } 00:25:11.236 ], 00:25:11.236 "driver_specific": {} 00:25:11.236 } 00:25:11.236 ] 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.236 17:12:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.236 "name": "Existed_Raid", 00:25:11.236 "uuid": "afa0a56d-80f4-466c-bcdb-acfb7e90245a", 00:25:11.236 "strip_size_kb": 64, 00:25:11.236 "state": "online", 00:25:11.236 "raid_level": "raid5f", 00:25:11.236 "superblock": false, 00:25:11.236 "num_base_bdevs": 3, 00:25:11.236 "num_base_bdevs_discovered": 3, 00:25:11.236 "num_base_bdevs_operational": 3, 00:25:11.236 "base_bdevs_list": [ 00:25:11.236 { 00:25:11.236 "name": "BaseBdev1", 00:25:11.236 "uuid": "352818eb-10c7-494b-88f2-f5b247ef7489", 00:25:11.236 "is_configured": true, 00:25:11.236 "data_offset": 0, 00:25:11.236 "data_size": 65536 00:25:11.236 }, 00:25:11.236 { 00:25:11.236 "name": "BaseBdev2", 00:25:11.236 "uuid": "5fdc0494-a047-4444-94e2-85a8282c5165", 00:25:11.236 "is_configured": true, 00:25:11.236 "data_offset": 0, 00:25:11.236 "data_size": 65536 00:25:11.236 }, 00:25:11.236 { 00:25:11.236 "name": "BaseBdev3", 00:25:11.236 "uuid": "e98eb3e9-8cf2-4025-9af0-55abeab6bea1", 00:25:11.236 "is_configured": true, 00:25:11.236 "data_offset": 0, 00:25:11.236 "data_size": 65536 00:25:11.236 } 00:25:11.236 ] 00:25:11.236 }' 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.236 17:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:11.495 17:12:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.495 [2024-11-08 17:12:48.083631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:11.495 "name": "Existed_Raid", 00:25:11.495 "aliases": [ 00:25:11.495 "afa0a56d-80f4-466c-bcdb-acfb7e90245a" 00:25:11.495 ], 00:25:11.495 "product_name": "Raid Volume", 00:25:11.495 "block_size": 512, 00:25:11.495 "num_blocks": 131072, 00:25:11.495 "uuid": "afa0a56d-80f4-466c-bcdb-acfb7e90245a", 00:25:11.495 "assigned_rate_limits": { 00:25:11.495 "rw_ios_per_sec": 0, 00:25:11.495 "rw_mbytes_per_sec": 0, 00:25:11.495 "r_mbytes_per_sec": 0, 00:25:11.495 "w_mbytes_per_sec": 0 00:25:11.495 }, 00:25:11.495 "claimed": false, 00:25:11.495 "zoned": false, 00:25:11.495 "supported_io_types": { 00:25:11.495 "read": true, 00:25:11.495 "write": true, 00:25:11.495 "unmap": false, 00:25:11.495 "flush": false, 00:25:11.495 "reset": true, 00:25:11.495 "nvme_admin": false, 00:25:11.495 "nvme_io": false, 00:25:11.495 "nvme_io_md": false, 00:25:11.495 "write_zeroes": true, 00:25:11.495 "zcopy": false, 00:25:11.495 "get_zone_info": false, 00:25:11.495 "zone_management": false, 00:25:11.495 "zone_append": false, 
00:25:11.495 "compare": false, 00:25:11.495 "compare_and_write": false, 00:25:11.495 "abort": false, 00:25:11.495 "seek_hole": false, 00:25:11.495 "seek_data": false, 00:25:11.495 "copy": false, 00:25:11.495 "nvme_iov_md": false 00:25:11.495 }, 00:25:11.495 "driver_specific": { 00:25:11.495 "raid": { 00:25:11.495 "uuid": "afa0a56d-80f4-466c-bcdb-acfb7e90245a", 00:25:11.495 "strip_size_kb": 64, 00:25:11.495 "state": "online", 00:25:11.495 "raid_level": "raid5f", 00:25:11.495 "superblock": false, 00:25:11.495 "num_base_bdevs": 3, 00:25:11.495 "num_base_bdevs_discovered": 3, 00:25:11.495 "num_base_bdevs_operational": 3, 00:25:11.495 "base_bdevs_list": [ 00:25:11.495 { 00:25:11.495 "name": "BaseBdev1", 00:25:11.495 "uuid": "352818eb-10c7-494b-88f2-f5b247ef7489", 00:25:11.495 "is_configured": true, 00:25:11.495 "data_offset": 0, 00:25:11.495 "data_size": 65536 00:25:11.495 }, 00:25:11.495 { 00:25:11.495 "name": "BaseBdev2", 00:25:11.495 "uuid": "5fdc0494-a047-4444-94e2-85a8282c5165", 00:25:11.495 "is_configured": true, 00:25:11.495 "data_offset": 0, 00:25:11.495 "data_size": 65536 00:25:11.495 }, 00:25:11.495 { 00:25:11.495 "name": "BaseBdev3", 00:25:11.495 "uuid": "e98eb3e9-8cf2-4025-9af0-55abeab6bea1", 00:25:11.495 "is_configured": true, 00:25:11.495 "data_offset": 0, 00:25:11.495 "data_size": 65536 00:25:11.495 } 00:25:11.495 ] 00:25:11.495 } 00:25:11.495 } 00:25:11.495 }' 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:11.495 BaseBdev2 00:25:11.495 BaseBdev3' 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.495 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.755 [2024-11-08 17:12:48.271491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:11.755 
17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.755 "name": "Existed_Raid", 00:25:11.755 "uuid": "afa0a56d-80f4-466c-bcdb-acfb7e90245a", 00:25:11.755 "strip_size_kb": 64, 00:25:11.755 "state": 
"online", 00:25:11.755 "raid_level": "raid5f", 00:25:11.755 "superblock": false, 00:25:11.755 "num_base_bdevs": 3, 00:25:11.755 "num_base_bdevs_discovered": 2, 00:25:11.755 "num_base_bdevs_operational": 2, 00:25:11.755 "base_bdevs_list": [ 00:25:11.755 { 00:25:11.755 "name": null, 00:25:11.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.755 "is_configured": false, 00:25:11.755 "data_offset": 0, 00:25:11.755 "data_size": 65536 00:25:11.755 }, 00:25:11.755 { 00:25:11.755 "name": "BaseBdev2", 00:25:11.755 "uuid": "5fdc0494-a047-4444-94e2-85a8282c5165", 00:25:11.755 "is_configured": true, 00:25:11.755 "data_offset": 0, 00:25:11.755 "data_size": 65536 00:25:11.755 }, 00:25:11.755 { 00:25:11.755 "name": "BaseBdev3", 00:25:11.755 "uuid": "e98eb3e9-8cf2-4025-9af0-55abeab6bea1", 00:25:11.755 "is_configured": true, 00:25:11.755 "data_offset": 0, 00:25:11.755 "data_size": 65536 00:25:11.755 } 00:25:11.755 ] 00:25:11.755 }' 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.755 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.016 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.016 [2024-11-08 17:12:48.686867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:12.016 [2024-11-08 17:12:48.686976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:12.276 [2024-11-08 17:12:48.748295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.276 [2024-11-08 17:12:48.788344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:25:12.276 [2024-11-08 17:12:48.788396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.276 BaseBdev2
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.276 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.276 [
00:25:12.276 {
00:25:12.276 "name": "BaseBdev2",
00:25:12.276 "aliases": [
00:25:12.276 "3cdf399e-b65c-4881-85de-8b802dbfb10a"
00:25:12.276 ],
00:25:12.276 "product_name": "Malloc disk",
00:25:12.276 "block_size": 512,
00:25:12.276 "num_blocks": 65536,
00:25:12.276 "uuid": "3cdf399e-b65c-4881-85de-8b802dbfb10a",
00:25:12.276 "assigned_rate_limits": {
00:25:12.276 "rw_ios_per_sec": 0,
00:25:12.276 "rw_mbytes_per_sec": 0,
00:25:12.276 "r_mbytes_per_sec": 0,
00:25:12.276 "w_mbytes_per_sec": 0
00:25:12.276 },
00:25:12.276 "claimed": false,
00:25:12.276 "zoned": false,
00:25:12.276 "supported_io_types": {
00:25:12.276 "read": true,
00:25:12.276 "write": true,
00:25:12.276 "unmap": true,
00:25:12.276 "flush": true,
00:25:12.276 "reset": true,
00:25:12.276 "nvme_admin": false,
00:25:12.276 "nvme_io": false,
00:25:12.276 "nvme_io_md": false,
00:25:12.276 "write_zeroes": true,
00:25:12.276 "zcopy": true,
00:25:12.276 "get_zone_info": false,
00:25:12.276 "zone_management": false,
00:25:12.276 "zone_append": false,
00:25:12.276 "compare": false,
00:25:12.276 "compare_and_write": false,
00:25:12.276 "abort": true,
00:25:12.276 "seek_hole": false,
00:25:12.276 "seek_data": false,
00:25:12.276 "copy": true,
00:25:12.276 "nvme_iov_md": false
00:25:12.276 },
00:25:12.277 "memory_domains": [
00:25:12.277 {
00:25:12.277 "dma_device_id": "system",
00:25:12.277 "dma_device_type": 1
00:25:12.277 },
00:25:12.277 {
00:25:12.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:12.277 "dma_device_type": 2
00:25:12.277 }
00:25:12.277 ],
00:25:12.277 "driver_specific": {}
00:25:12.277 }
00:25:12.277 ]
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.277 BaseBdev3
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.277 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.538 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.538 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:25:12.538 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.538 17:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.538 [
00:25:12.538 {
00:25:12.538 "name": "BaseBdev3",
00:25:12.538 "aliases": [
00:25:12.538 "e80d2c63-4b10-469b-ae1d-ec39a722d46c"
00:25:12.538 ],
00:25:12.538 "product_name": "Malloc disk",
00:25:12.538 "block_size": 512,
00:25:12.538 "num_blocks": 65536,
00:25:12.538 "uuid": "e80d2c63-4b10-469b-ae1d-ec39a722d46c",
00:25:12.538 "assigned_rate_limits": {
00:25:12.538 "rw_ios_per_sec": 0,
00:25:12.538 "rw_mbytes_per_sec": 0,
00:25:12.538 "r_mbytes_per_sec": 0,
00:25:12.538 "w_mbytes_per_sec": 0
00:25:12.538 },
00:25:12.538 "claimed": false,
00:25:12.538 "zoned": false,
00:25:12.538 "supported_io_types": {
00:25:12.538 "read": true,
00:25:12.538 "write": true,
00:25:12.538 "unmap": true,
00:25:12.538 "flush": true,
00:25:12.538 "reset": true,
00:25:12.538 "nvme_admin": false,
00:25:12.538 "nvme_io": false,
00:25:12.538 "nvme_io_md": false,
00:25:12.538 "write_zeroes": true,
00:25:12.538 "zcopy": true,
00:25:12.538 "get_zone_info": false,
00:25:12.538 "zone_management": false,
00:25:12.538 "zone_append": false,
00:25:12.538 "compare": false,
00:25:12.538 "compare_and_write": false,
00:25:12.538 "abort": true,
00:25:12.538 "seek_hole": false,
00:25:12.538 "seek_data": false,
00:25:12.538 "copy": true,
00:25:12.538 "nvme_iov_md": false
00:25:12.538 },
00:25:12.538 "memory_domains": [
00:25:12.538 {
00:25:12.538 "dma_device_id": "system",
00:25:12.538 "dma_device_type": 1
00:25:12.538 },
00:25:12.538 {
00:25:12.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:12.538 "dma_device_type": 2
00:25:12.538 }
00:25:12.538 ],
00:25:12.538 "driver_specific": {}
00:25:12.538 }
00:25:12.538 ]
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.538 [2024-11-08 17:12:49.012059] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:25:12.538 [2024-11-08 17:12:49.012202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:12.538 [2024-11-08 17:12:49.012275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:12.538 [2024-11-08 17:12:49.014219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:12.538 "name": "Existed_Raid",
00:25:12.538 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:12.538 "strip_size_kb": 64,
00:25:12.538 "state": "configuring",
00:25:12.538 "raid_level": "raid5f",
00:25:12.538 "superblock": false,
00:25:12.538 "num_base_bdevs": 3,
00:25:12.538 "num_base_bdevs_discovered": 2,
00:25:12.538 "num_base_bdevs_operational": 3,
00:25:12.538 "base_bdevs_list": [
00:25:12.538 {
00:25:12.538 "name": "BaseBdev1",
00:25:12.538 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:12.538 "is_configured": false,
00:25:12.538 "data_offset": 0,
00:25:12.538 "data_size": 0
00:25:12.538 },
00:25:12.538 {
00:25:12.538 "name": "BaseBdev2",
00:25:12.538 "uuid": "3cdf399e-b65c-4881-85de-8b802dbfb10a",
00:25:12.538 "is_configured": true,
00:25:12.538 "data_offset": 0,
00:25:12.538 "data_size": 65536
00:25:12.538 },
00:25:12.538 {
00:25:12.538 "name": "BaseBdev3",
00:25:12.538 "uuid": "e80d2c63-4b10-469b-ae1d-ec39a722d46c",
00:25:12.538 "is_configured": true,
00:25:12.538 "data_offset": 0,
00:25:12.538 "data_size": 65536
00:25:12.538 }
00:25:12.538 ]
00:25:12.538 }'
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:12.538 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.798 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.799 [2024-11-08 17:12:49.344160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:12.799 "name": "Existed_Raid",
00:25:12.799 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:12.799 "strip_size_kb": 64,
00:25:12.799 "state": "configuring",
00:25:12.799 "raid_level": "raid5f",
00:25:12.799 "superblock": false,
00:25:12.799 "num_base_bdevs": 3,
00:25:12.799 "num_base_bdevs_discovered": 1,
00:25:12.799 "num_base_bdevs_operational": 3,
00:25:12.799 "base_bdevs_list": [
00:25:12.799 {
00:25:12.799 "name": "BaseBdev1",
00:25:12.799 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:12.799 "is_configured": false,
00:25:12.799 "data_offset": 0,
00:25:12.799 "data_size": 0
00:25:12.799 },
00:25:12.799 {
00:25:12.799 "name": null,
00:25:12.799 "uuid": "3cdf399e-b65c-4881-85de-8b802dbfb10a",
00:25:12.799 "is_configured": false,
00:25:12.799 "data_offset": 0,
00:25:12.799 "data_size": 65536
00:25:12.799 },
00:25:12.799 {
00:25:12.799 "name": "BaseBdev3",
00:25:12.799 "uuid": "e80d2c63-4b10-469b-ae1d-ec39a722d46c",
00:25:12.799 "is_configured": true,
00:25:12.799 "data_offset": 0,
00:25:12.799 "data_size": 65536
00:25:12.799 }
00:25:12.799 ]
00:25:12.799 }'
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:12.799 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.057 [2024-11-08 17:12:49.740796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:13.057 BaseBdev1
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.057 [
00:25:13.057 {
00:25:13.057 "name": "BaseBdev1",
00:25:13.057 "aliases": [
00:25:13.057 "c43a777f-34e0-4bdd-81be-8ff7420de0bd"
00:25:13.057 ],
00:25:13.057 "product_name": "Malloc disk",
00:25:13.057 "block_size": 512,
00:25:13.057 "num_blocks": 65536,
00:25:13.057 "uuid": "c43a777f-34e0-4bdd-81be-8ff7420de0bd",
00:25:13.057 "assigned_rate_limits": {
00:25:13.057 "rw_ios_per_sec": 0,
00:25:13.057 "rw_mbytes_per_sec": 0,
00:25:13.057 "r_mbytes_per_sec": 0,
00:25:13.057 "w_mbytes_per_sec": 0
00:25:13.057 },
00:25:13.057 "claimed": true,
00:25:13.057 "claim_type": "exclusive_write",
00:25:13.057 "zoned": false,
00:25:13.057 "supported_io_types": {
00:25:13.057 "read": true,
00:25:13.057 "write": true,
00:25:13.057 "unmap": true,
00:25:13.057 "flush": true,
00:25:13.057 "reset": true,
00:25:13.057 "nvme_admin": false,
00:25:13.057 "nvme_io": false,
00:25:13.057 "nvme_io_md": false,
00:25:13.057 "write_zeroes": true,
00:25:13.057 "zcopy": true,
00:25:13.057 "get_zone_info": false,
00:25:13.057 "zone_management": false,
00:25:13.057 "zone_append": false,
00:25:13.057 "compare": false,
00:25:13.057 "compare_and_write": false,
00:25:13.057 "abort": true,
00:25:13.057 "seek_hole": false,
00:25:13.057 "seek_data": false,
00:25:13.057 "copy": true,
00:25:13.057 "nvme_iov_md": false
00:25:13.057 },
00:25:13.057 "memory_domains": [
00:25:13.057 {
00:25:13.057 "dma_device_id": "system",
00:25:13.057 "dma_device_type": 1
00:25:13.057 },
00:25:13.057 {
00:25:13.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:13.057 "dma_device_type": 2
00:25:13.057 }
00:25:13.057 ],
00:25:13.057 "driver_specific": {}
00:25:13.057 }
00:25:13.057 ]
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:13.057 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:13.317 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:13.317 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:13.317 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.317 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.317 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.317 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:13.317 "name": "Existed_Raid",
00:25:13.317 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:13.317 "strip_size_kb": 64,
00:25:13.317 "state": "configuring",
00:25:13.317 "raid_level": "raid5f",
00:25:13.317 "superblock": false,
00:25:13.317 "num_base_bdevs": 3,
00:25:13.317 "num_base_bdevs_discovered": 2,
00:25:13.317 "num_base_bdevs_operational": 3,
00:25:13.317 "base_bdevs_list": [
00:25:13.317 {
00:25:13.317 "name": "BaseBdev1",
00:25:13.317 "uuid": "c43a777f-34e0-4bdd-81be-8ff7420de0bd",
00:25:13.317 "is_configured": true,
00:25:13.317 "data_offset": 0,
00:25:13.317 "data_size": 65536
00:25:13.317 },
00:25:13.317 {
00:25:13.317 "name": null,
00:25:13.317 "uuid": "3cdf399e-b65c-4881-85de-8b802dbfb10a",
00:25:13.317 "is_configured": false,
00:25:13.317 "data_offset": 0,
00:25:13.317 "data_size": 65536
00:25:13.317 },
00:25:13.317 {
00:25:13.317 "name": "BaseBdev3",
00:25:13.317 "uuid": "e80d2c63-4b10-469b-ae1d-ec39a722d46c",
00:25:13.317 "is_configured": true,
00:25:13.317 "data_offset": 0,
00:25:13.317 "data_size": 65536
00:25:13.317 }
00:25:13.317 ]
00:25:13.317 }'
00:25:13.317 17:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:13.317 17:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.574 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:25:13.574 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:13.574 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.575 [2024-11-08 17:12:50.116923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:13.575 "name": "Existed_Raid",
00:25:13.575 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:13.575 "strip_size_kb": 64,
00:25:13.575 "state": "configuring",
00:25:13.575 "raid_level": "raid5f",
00:25:13.575 "superblock": false,
00:25:13.575 "num_base_bdevs": 3,
00:25:13.575 "num_base_bdevs_discovered": 1,
00:25:13.575 "num_base_bdevs_operational": 3,
00:25:13.575 "base_bdevs_list": [
00:25:13.575 {
00:25:13.575 "name": "BaseBdev1",
00:25:13.575 "uuid": "c43a777f-34e0-4bdd-81be-8ff7420de0bd",
00:25:13.575 "is_configured": true,
00:25:13.575 "data_offset": 0,
00:25:13.575 "data_size": 65536
00:25:13.575 },
00:25:13.575 {
00:25:13.575 "name": null,
00:25:13.575 "uuid": "3cdf399e-b65c-4881-85de-8b802dbfb10a",
00:25:13.575 "is_configured": false,
00:25:13.575 "data_offset": 0,
00:25:13.575 "data_size": 65536
00:25:13.575 },
00:25:13.575 {
00:25:13.575 "name": null,
00:25:13.575 "uuid": "e80d2c63-4b10-469b-ae1d-ec39a722d46c",
00:25:13.575 "is_configured": false,
00:25:13.575 "data_offset": 0,
00:25:13.575 "data_size": 65536
00:25:13.575 }
00:25:13.575 ]
00:25:13.575 }'
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:13.575 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.837 [2024-11-08 17:12:50.497042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:13.837 "name": "Existed_Raid",
00:25:13.837 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:13.837 "strip_size_kb": 64,
00:25:13.837 "state": "configuring",
00:25:13.837 "raid_level": "raid5f",
00:25:13.837 "superblock": false,
00:25:13.837 "num_base_bdevs": 3,
00:25:13.837 "num_base_bdevs_discovered": 2,
00:25:13.837 "num_base_bdevs_operational": 3,
00:25:13.837 "base_bdevs_list": [
00:25:13.837 {
00:25:13.837 "name": "BaseBdev1",
00:25:13.837 "uuid": "c43a777f-34e0-4bdd-81be-8ff7420de0bd",
00:25:13.837 "is_configured": true,
00:25:13.837 "data_offset": 0,
00:25:13.837 "data_size": 65536
00:25:13.837 },
00:25:13.837 {
00:25:13.837 "name": null,
00:25:13.837 "uuid": "3cdf399e-b65c-4881-85de-8b802dbfb10a",
00:25:13.837 "is_configured": false,
00:25:13.837 "data_offset": 0,
00:25:13.837 "data_size": 65536
00:25:13.837 },
00:25:13.837 {
00:25:13.837 "name": "BaseBdev3",
00:25:13.837 "uuid": "e80d2c63-4b10-469b-ae1d-ec39a722d46c",
00:25:13.837 "is_configured": true,
00:25:13.837 "data_offset": 0,
00:25:13.837 "data_size": 65536
00:25:13.837 }
00:25:13.837 ]
00:25:13.837 }'
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:13.837 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:14.410 [2024-11-08 17:12:50.853135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:14.410 "name": "Existed_Raid",
00:25:14.410 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:14.410 "strip_size_kb": 64,
00:25:14.410 "state": "configuring",
00:25:14.410 "raid_level": "raid5f",
00:25:14.410 "superblock": false,
00:25:14.410 "num_base_bdevs": 3,
00:25:14.410 "num_base_bdevs_discovered": 1,
00:25:14.410 "num_base_bdevs_operational": 3,
00:25:14.410 "base_bdevs_list": [
00:25:14.410 {
00:25:14.410 "name": null,
00:25:14.410 "uuid": "c43a777f-34e0-4bdd-81be-8ff7420de0bd",
00:25:14.410 "is_configured": false,
00:25:14.410 "data_offset": 0,
00:25:14.410 "data_size": 65536
00:25:14.410 },
00:25:14.410 {
00:25:14.410 "name": null,
00:25:14.410 "uuid": "3cdf399e-b65c-4881-85de-8b802dbfb10a",
00:25:14.410 "is_configured": false,
00:25:14.410 "data_offset": 0,
00:25:14.410 "data_size": 65536
00:25:14.410 },
00:25:14.410 {
00:25:14.410 "name": "BaseBdev3",
00:25:14.410 "uuid": "e80d2c63-4b10-469b-ae1d-ec39a722d46c",
00:25:14.410 "is_configured": true,
00:25:14.410 "data_offset": 0,
00:25:14.410 "data_size": 65536
00:25:14.410 }
00:25:14.410 ]
00:25:14.410 }'
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:14.410 17:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:25:14.672 [2024-11-08 17:12:51.278498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:14.672 17:12:51
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:14.672 "name": "Existed_Raid", 00:25:14.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.672 "strip_size_kb": 64, 00:25:14.672 "state": "configuring", 00:25:14.672 "raid_level": "raid5f", 00:25:14.672 "superblock": false, 00:25:14.672 "num_base_bdevs": 3, 00:25:14.672 "num_base_bdevs_discovered": 2, 00:25:14.672 "num_base_bdevs_operational": 3, 00:25:14.672 "base_bdevs_list": [ 00:25:14.672 { 00:25:14.672 "name": null, 00:25:14.672 "uuid": "c43a777f-34e0-4bdd-81be-8ff7420de0bd", 00:25:14.672 "is_configured": false, 00:25:14.672 "data_offset": 0, 00:25:14.672 "data_size": 65536 00:25:14.672 }, 00:25:14.672 { 00:25:14.672 "name": "BaseBdev2", 00:25:14.672 "uuid": "3cdf399e-b65c-4881-85de-8b802dbfb10a", 00:25:14.672 "is_configured": true, 00:25:14.672 "data_offset": 0, 00:25:14.672 "data_size": 65536 00:25:14.672 }, 00:25:14.672 { 00:25:14.672 "name": "BaseBdev3", 00:25:14.672 "uuid": "e80d2c63-4b10-469b-ae1d-ec39a722d46c", 00:25:14.672 "is_configured": true, 00:25:14.672 "data_offset": 0, 00:25:14.672 "data_size": 65536 00:25:14.672 } 00:25:14.672 ] 00:25:14.672 }' 00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.672 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.932 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.933 17:12:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.933 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.933 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:14.933 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.933 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:14.933 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:14.933 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.933 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.933 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.933 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c43a777f-34e0-4bdd-81be-8ff7420de0bd 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.195 [2024-11-08 17:12:51.679709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:15.195 [2024-11-08 17:12:51.679907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:15.195 [2024-11-08 17:12:51.679927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:15.195 [2024-11-08 17:12:51.680205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:25:15.195 [2024-11-08 17:12:51.684028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:15.195 [2024-11-08 17:12:51.684128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:15.195 [2024-11-08 17:12:51.684515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.195 NewBaseBdev 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.195 17:12:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.195 [ 00:25:15.195 { 00:25:15.195 "name": "NewBaseBdev", 00:25:15.195 "aliases": [ 00:25:15.195 "c43a777f-34e0-4bdd-81be-8ff7420de0bd" 00:25:15.195 ], 00:25:15.195 "product_name": "Malloc disk", 00:25:15.195 "block_size": 512, 00:25:15.195 "num_blocks": 65536, 00:25:15.195 "uuid": "c43a777f-34e0-4bdd-81be-8ff7420de0bd", 00:25:15.195 "assigned_rate_limits": { 00:25:15.195 "rw_ios_per_sec": 0, 00:25:15.195 "rw_mbytes_per_sec": 0, 00:25:15.195 "r_mbytes_per_sec": 0, 00:25:15.195 "w_mbytes_per_sec": 0 00:25:15.195 }, 00:25:15.195 "claimed": true, 00:25:15.195 "claim_type": "exclusive_write", 00:25:15.195 "zoned": false, 00:25:15.195 "supported_io_types": { 00:25:15.195 "read": true, 00:25:15.195 "write": true, 00:25:15.195 "unmap": true, 00:25:15.195 "flush": true, 00:25:15.195 "reset": true, 00:25:15.195 "nvme_admin": false, 00:25:15.195 "nvme_io": false, 00:25:15.195 "nvme_io_md": false, 00:25:15.195 "write_zeroes": true, 00:25:15.195 "zcopy": true, 00:25:15.195 "get_zone_info": false, 00:25:15.195 "zone_management": false, 00:25:15.195 "zone_append": false, 00:25:15.195 "compare": false, 00:25:15.195 "compare_and_write": false, 00:25:15.195 "abort": true, 00:25:15.195 "seek_hole": false, 00:25:15.195 "seek_data": false, 00:25:15.195 "copy": true, 00:25:15.195 "nvme_iov_md": false 00:25:15.195 }, 00:25:15.195 "memory_domains": [ 00:25:15.195 { 00:25:15.195 "dma_device_id": "system", 00:25:15.195 "dma_device_type": 1 00:25:15.195 }, 00:25:15.195 { 00:25:15.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.195 "dma_device_type": 2 00:25:15.195 } 00:25:15.195 ], 00:25:15.195 "driver_specific": {} 00:25:15.195 } 00:25:15.195 ] 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:25:15.195 17:12:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.195 "name": "Existed_Raid", 00:25:15.195 "uuid": "34240729-208c-43e4-b6ef-ee244310af34", 00:25:15.195 "strip_size_kb": 64, 00:25:15.195 "state": "online", 
00:25:15.195 "raid_level": "raid5f", 00:25:15.195 "superblock": false, 00:25:15.195 "num_base_bdevs": 3, 00:25:15.195 "num_base_bdevs_discovered": 3, 00:25:15.195 "num_base_bdevs_operational": 3, 00:25:15.195 "base_bdevs_list": [ 00:25:15.195 { 00:25:15.195 "name": "NewBaseBdev", 00:25:15.195 "uuid": "c43a777f-34e0-4bdd-81be-8ff7420de0bd", 00:25:15.195 "is_configured": true, 00:25:15.195 "data_offset": 0, 00:25:15.195 "data_size": 65536 00:25:15.195 }, 00:25:15.195 { 00:25:15.195 "name": "BaseBdev2", 00:25:15.195 "uuid": "3cdf399e-b65c-4881-85de-8b802dbfb10a", 00:25:15.195 "is_configured": true, 00:25:15.195 "data_offset": 0, 00:25:15.195 "data_size": 65536 00:25:15.195 }, 00:25:15.195 { 00:25:15.195 "name": "BaseBdev3", 00:25:15.195 "uuid": "e80d2c63-4b10-469b-ae1d-ec39a722d46c", 00:25:15.195 "is_configured": true, 00:25:15.195 "data_offset": 0, 00:25:15.195 "data_size": 65536 00:25:15.195 } 00:25:15.195 ] 00:25:15.195 }' 00:25:15.195 17:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.196 17:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.461 [2024-11-08 17:12:52.033109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.461 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:15.461 "name": "Existed_Raid", 00:25:15.461 "aliases": [ 00:25:15.461 "34240729-208c-43e4-b6ef-ee244310af34" 00:25:15.461 ], 00:25:15.461 "product_name": "Raid Volume", 00:25:15.461 "block_size": 512, 00:25:15.461 "num_blocks": 131072, 00:25:15.461 "uuid": "34240729-208c-43e4-b6ef-ee244310af34", 00:25:15.461 "assigned_rate_limits": { 00:25:15.461 "rw_ios_per_sec": 0, 00:25:15.461 "rw_mbytes_per_sec": 0, 00:25:15.461 "r_mbytes_per_sec": 0, 00:25:15.461 "w_mbytes_per_sec": 0 00:25:15.461 }, 00:25:15.461 "claimed": false, 00:25:15.461 "zoned": false, 00:25:15.461 "supported_io_types": { 00:25:15.461 "read": true, 00:25:15.461 "write": true, 00:25:15.461 "unmap": false, 00:25:15.461 "flush": false, 00:25:15.461 "reset": true, 00:25:15.461 "nvme_admin": false, 00:25:15.461 "nvme_io": false, 00:25:15.461 "nvme_io_md": false, 00:25:15.461 "write_zeroes": true, 00:25:15.461 "zcopy": false, 00:25:15.461 "get_zone_info": false, 00:25:15.461 "zone_management": false, 00:25:15.461 "zone_append": false, 00:25:15.461 "compare": false, 00:25:15.461 "compare_and_write": false, 00:25:15.461 "abort": false, 00:25:15.461 "seek_hole": false, 00:25:15.461 "seek_data": false, 00:25:15.461 "copy": false, 00:25:15.461 "nvme_iov_md": false 00:25:15.461 }, 00:25:15.461 "driver_specific": { 00:25:15.461 "raid": { 00:25:15.461 "uuid": "34240729-208c-43e4-b6ef-ee244310af34", 
00:25:15.461 "strip_size_kb": 64, 00:25:15.461 "state": "online", 00:25:15.461 "raid_level": "raid5f", 00:25:15.461 "superblock": false, 00:25:15.461 "num_base_bdevs": 3, 00:25:15.462 "num_base_bdevs_discovered": 3, 00:25:15.462 "num_base_bdevs_operational": 3, 00:25:15.462 "base_bdevs_list": [ 00:25:15.462 { 00:25:15.462 "name": "NewBaseBdev", 00:25:15.462 "uuid": "c43a777f-34e0-4bdd-81be-8ff7420de0bd", 00:25:15.462 "is_configured": true, 00:25:15.462 "data_offset": 0, 00:25:15.462 "data_size": 65536 00:25:15.462 }, 00:25:15.462 { 00:25:15.462 "name": "BaseBdev2", 00:25:15.462 "uuid": "3cdf399e-b65c-4881-85de-8b802dbfb10a", 00:25:15.462 "is_configured": true, 00:25:15.462 "data_offset": 0, 00:25:15.462 "data_size": 65536 00:25:15.462 }, 00:25:15.462 { 00:25:15.462 "name": "BaseBdev3", 00:25:15.462 "uuid": "e80d2c63-4b10-469b-ae1d-ec39a722d46c", 00:25:15.462 "is_configured": true, 00:25:15.462 "data_offset": 0, 00:25:15.462 "data_size": 65536 00:25:15.462 } 00:25:15.462 ] 00:25:15.462 } 00:25:15.462 } 00:25:15.462 }' 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:15.462 BaseBdev2 00:25:15.462 BaseBdev3' 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:15.462 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:15.721 
17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.721 [2024-11-08 17:12:52.244938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:15.721 [2024-11-08 17:12:52.244969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:15.721 [2024-11-08 17:12:52.245057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.721 [2024-11-08 17:12:52.245357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.721 [2024-11-08 17:12:52.245371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78362 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 78362 ']' 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 78362 
00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78362 00:25:15.721 killing process with pid 78362 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78362' 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 78362 00:25:15.721 17:12:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 78362 00:25:15.721 [2024-11-08 17:12:52.278193] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:15.979 [2024-11-08 17:12:52.478124] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:16.545 ************************************ 00:25:16.545 END TEST raid5f_state_function_test 00:25:16.545 ************************************ 00:25:16.545 17:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:16.545 00:25:16.545 real 0m8.054s 00:25:16.545 user 0m12.686s 00:25:16.545 sys 0m1.409s 00:25:16.545 17:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:16.545 17:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.804 17:12:53 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:25:16.804 17:12:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:16.804 
17:12:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:16.804 17:12:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:16.804 ************************************ 00:25:16.804 START TEST raid5f_state_function_test_sb 00:25:16.804 ************************************ 00:25:16.804 17:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:16.805 
17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:16.805 Process raid pid: 78956 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78956 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78956' 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78956 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78956 ']' 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:16.805 17:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.805 [2024-11-08 17:12:53.388862] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:25:16.805 [2024-11-08 17:12:53.388995] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.062 [2024-11-08 17:12:53.551849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.062 [2024-11-08 17:12:53.671766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.320 [2024-11-08 17:12:53.820661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:17.320 [2024-11-08 17:12:53.820697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:17.578 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.579 [2024-11-08 17:12:54.248780] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:17.579 [2024-11-08 17:12:54.248838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:17.579 [2024-11-08 17:12:54.248850] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:17.579 [2024-11-08 17:12:54.248862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:17.579 [2024-11-08 17:12:54.248869] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:17.579 [2024-11-08 17:12:54.248879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:17.579 "name": "Existed_Raid", 00:25:17.579 "uuid": "ba7c48fa-7a2e-4ebe-b63e-cf0901a9b82c", 00:25:17.579 "strip_size_kb": 64, 00:25:17.579 "state": "configuring", 00:25:17.579 "raid_level": "raid5f", 00:25:17.579 "superblock": true, 00:25:17.579 "num_base_bdevs": 3, 00:25:17.579 "num_base_bdevs_discovered": 0, 00:25:17.579 "num_base_bdevs_operational": 3, 00:25:17.579 "base_bdevs_list": [ 00:25:17.579 { 00:25:17.579 "name": "BaseBdev1", 00:25:17.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.579 "is_configured": false, 00:25:17.579 "data_offset": 0, 00:25:17.579 "data_size": 0 00:25:17.579 }, 00:25:17.579 { 00:25:17.579 "name": "BaseBdev2", 00:25:17.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.579 "is_configured": false, 00:25:17.579 "data_offset": 0, 00:25:17.579 
"data_size": 0 00:25:17.579 }, 00:25:17.579 { 00:25:17.579 "name": "BaseBdev3", 00:25:17.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.579 "is_configured": false, 00:25:17.579 "data_offset": 0, 00:25:17.579 "data_size": 0 00:25:17.579 } 00:25:17.579 ] 00:25:17.579 }' 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:17.579 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.144 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:18.144 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.144 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.144 [2024-11-08 17:12:54.592808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:18.144 [2024-11-08 17:12:54.592853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:18.144 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.144 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:18.144 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.144 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.144 [2024-11-08 17:12:54.600799] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:18.145 [2024-11-08 17:12:54.600845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:18.145 [2024-11-08 17:12:54.600855] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:25:18.145 [2024-11-08 17:12:54.600866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:18.145 [2024-11-08 17:12:54.600873] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:18.145 [2024-11-08 17:12:54.600883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.145 BaseBdev1 00:25:18.145 [2024-11-08 17:12:54.633381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_wait_for_examine 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.145 [ 00:25:18.145 { 00:25:18.145 "name": "BaseBdev1", 00:25:18.145 "aliases": [ 00:25:18.145 "36d2aa21-da0d-44ef-868c-2f2e2261e4bb" 00:25:18.145 ], 00:25:18.145 "product_name": "Malloc disk", 00:25:18.145 "block_size": 512, 00:25:18.145 "num_blocks": 65536, 00:25:18.145 "uuid": "36d2aa21-da0d-44ef-868c-2f2e2261e4bb", 00:25:18.145 "assigned_rate_limits": { 00:25:18.145 "rw_ios_per_sec": 0, 00:25:18.145 "rw_mbytes_per_sec": 0, 00:25:18.145 "r_mbytes_per_sec": 0, 00:25:18.145 "w_mbytes_per_sec": 0 00:25:18.145 }, 00:25:18.145 "claimed": true, 00:25:18.145 "claim_type": "exclusive_write", 00:25:18.145 "zoned": false, 00:25:18.145 "supported_io_types": { 00:25:18.145 "read": true, 00:25:18.145 "write": true, 00:25:18.145 "unmap": true, 00:25:18.145 "flush": true, 00:25:18.145 "reset": true, 00:25:18.145 "nvme_admin": false, 00:25:18.145 "nvme_io": false, 00:25:18.145 "nvme_io_md": false, 00:25:18.145 "write_zeroes": true, 00:25:18.145 "zcopy": true, 00:25:18.145 "get_zone_info": false, 00:25:18.145 "zone_management": false, 00:25:18.145 "zone_append": false, 00:25:18.145 "compare": false, 00:25:18.145 "compare_and_write": false, 00:25:18.145 "abort": true, 00:25:18.145 "seek_hole": false, 00:25:18.145 "seek_data": false, 00:25:18.145 "copy": true, 
00:25:18.145 "nvme_iov_md": false 00:25:18.145 }, 00:25:18.145 "memory_domains": [ 00:25:18.145 { 00:25:18.145 "dma_device_id": "system", 00:25:18.145 "dma_device_type": 1 00:25:18.145 }, 00:25:18.145 { 00:25:18.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.145 "dma_device_type": 2 00:25:18.145 } 00:25:18.145 ], 00:25:18.145 "driver_specific": {} 00:25:18.145 } 00:25:18.145 ] 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.145 17:12:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.145 "name": "Existed_Raid", 00:25:18.145 "uuid": "4254633b-c0eb-4b8e-af34-9e227d97484d", 00:25:18.145 "strip_size_kb": 64, 00:25:18.145 "state": "configuring", 00:25:18.145 "raid_level": "raid5f", 00:25:18.145 "superblock": true, 00:25:18.145 "num_base_bdevs": 3, 00:25:18.145 "num_base_bdevs_discovered": 1, 00:25:18.145 "num_base_bdevs_operational": 3, 00:25:18.145 "base_bdevs_list": [ 00:25:18.145 { 00:25:18.145 "name": "BaseBdev1", 00:25:18.145 "uuid": "36d2aa21-da0d-44ef-868c-2f2e2261e4bb", 00:25:18.145 "is_configured": true, 00:25:18.145 "data_offset": 2048, 00:25:18.145 "data_size": 63488 00:25:18.145 }, 00:25:18.145 { 00:25:18.145 "name": "BaseBdev2", 00:25:18.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.145 "is_configured": false, 00:25:18.145 "data_offset": 0, 00:25:18.145 "data_size": 0 00:25:18.145 }, 00:25:18.145 { 00:25:18.145 "name": "BaseBdev3", 00:25:18.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.145 "is_configured": false, 00:25:18.145 "data_offset": 0, 00:25:18.145 "data_size": 0 00:25:18.145 } 00:25:18.145 ] 00:25:18.145 }' 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.145 17:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.404 [2024-11-08 17:12:55.021534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:18.404 [2024-11-08 17:12:55.021712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.404 [2024-11-08 17:12:55.029579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:18.404 [2024-11-08 17:12:55.031562] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:18.404 [2024-11-08 17:12:55.031686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:18.404 [2024-11-08 17:12:55.031745] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:18.404 [2024-11-08 17:12:55.031784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.404 "name": "Existed_Raid", 00:25:18.404 "uuid": 
"ab259c72-2369-40f7-a5d6-7c70d1805d7f", 00:25:18.404 "strip_size_kb": 64, 00:25:18.404 "state": "configuring", 00:25:18.404 "raid_level": "raid5f", 00:25:18.404 "superblock": true, 00:25:18.404 "num_base_bdevs": 3, 00:25:18.404 "num_base_bdevs_discovered": 1, 00:25:18.404 "num_base_bdevs_operational": 3, 00:25:18.404 "base_bdevs_list": [ 00:25:18.404 { 00:25:18.404 "name": "BaseBdev1", 00:25:18.404 "uuid": "36d2aa21-da0d-44ef-868c-2f2e2261e4bb", 00:25:18.404 "is_configured": true, 00:25:18.404 "data_offset": 2048, 00:25:18.404 "data_size": 63488 00:25:18.404 }, 00:25:18.404 { 00:25:18.404 "name": "BaseBdev2", 00:25:18.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.404 "is_configured": false, 00:25:18.404 "data_offset": 0, 00:25:18.404 "data_size": 0 00:25:18.404 }, 00:25:18.404 { 00:25:18.404 "name": "BaseBdev3", 00:25:18.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.404 "is_configured": false, 00:25:18.404 "data_offset": 0, 00:25:18.404 "data_size": 0 00:25:18.404 } 00:25:18.404 ] 00:25:18.404 }' 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.404 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.662 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:18.662 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.662 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.921 [2024-11-08 17:12:55.384344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:18.921 BaseBdev2 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 
00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.921 [ 00:25:18.921 { 00:25:18.921 "name": "BaseBdev2", 00:25:18.921 "aliases": [ 00:25:18.921 "93b7f710-7030-4819-b341-ec5b59b58fb6" 00:25:18.921 ], 00:25:18.921 "product_name": "Malloc disk", 00:25:18.921 "block_size": 512, 00:25:18.921 "num_blocks": 65536, 00:25:18.921 "uuid": "93b7f710-7030-4819-b341-ec5b59b58fb6", 00:25:18.921 "assigned_rate_limits": { 00:25:18.921 "rw_ios_per_sec": 0, 00:25:18.921 "rw_mbytes_per_sec": 0, 00:25:18.921 "r_mbytes_per_sec": 0, 00:25:18.921 "w_mbytes_per_sec": 0 00:25:18.921 }, 00:25:18.921 "claimed": true, 00:25:18.921 "claim_type": 
"exclusive_write", 00:25:18.921 "zoned": false, 00:25:18.921 "supported_io_types": { 00:25:18.921 "read": true, 00:25:18.921 "write": true, 00:25:18.921 "unmap": true, 00:25:18.921 "flush": true, 00:25:18.921 "reset": true, 00:25:18.921 "nvme_admin": false, 00:25:18.921 "nvme_io": false, 00:25:18.921 "nvme_io_md": false, 00:25:18.921 "write_zeroes": true, 00:25:18.921 "zcopy": true, 00:25:18.921 "get_zone_info": false, 00:25:18.921 "zone_management": false, 00:25:18.921 "zone_append": false, 00:25:18.921 "compare": false, 00:25:18.921 "compare_and_write": false, 00:25:18.921 "abort": true, 00:25:18.921 "seek_hole": false, 00:25:18.921 "seek_data": false, 00:25:18.921 "copy": true, 00:25:18.921 "nvme_iov_md": false 00:25:18.921 }, 00:25:18.921 "memory_domains": [ 00:25:18.921 { 00:25:18.921 "dma_device_id": "system", 00:25:18.921 "dma_device_type": 1 00:25:18.921 }, 00:25:18.921 { 00:25:18.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.921 "dma_device_type": 2 00:25:18.921 } 00:25:18.921 ], 00:25:18.921 "driver_specific": {} 00:25:18.921 } 00:25:18.921 ] 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:18.921 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.922 "name": "Existed_Raid", 00:25:18.922 "uuid": "ab259c72-2369-40f7-a5d6-7c70d1805d7f", 00:25:18.922 "strip_size_kb": 64, 00:25:18.922 "state": "configuring", 00:25:18.922 "raid_level": "raid5f", 00:25:18.922 "superblock": true, 00:25:18.922 "num_base_bdevs": 3, 00:25:18.922 "num_base_bdevs_discovered": 2, 00:25:18.922 "num_base_bdevs_operational": 3, 00:25:18.922 "base_bdevs_list": [ 00:25:18.922 { 00:25:18.922 "name": "BaseBdev1", 00:25:18.922 "uuid": "36d2aa21-da0d-44ef-868c-2f2e2261e4bb", 00:25:18.922 "is_configured": true, 
00:25:18.922 "data_offset": 2048, 00:25:18.922 "data_size": 63488 00:25:18.922 }, 00:25:18.922 { 00:25:18.922 "name": "BaseBdev2", 00:25:18.922 "uuid": "93b7f710-7030-4819-b341-ec5b59b58fb6", 00:25:18.922 "is_configured": true, 00:25:18.922 "data_offset": 2048, 00:25:18.922 "data_size": 63488 00:25:18.922 }, 00:25:18.922 { 00:25:18.922 "name": "BaseBdev3", 00:25:18.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.922 "is_configured": false, 00:25:18.922 "data_offset": 0, 00:25:18.922 "data_size": 0 00:25:18.922 } 00:25:18.922 ] 00:25:18.922 }' 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.922 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.180 [2024-11-08 17:12:55.800383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:19.180 [2024-11-08 17:12:55.801035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:19.180 [2024-11-08 17:12:55.801208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:19.180 [2024-11-08 17:12:55.801534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:19.180 BaseBdev3 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.180 [2024-11-08 17:12:55.805446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:19.180 [2024-11-08 17:12:55.805465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:19.180 [2024-11-08 17:12:55.805649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.180 [ 00:25:19.180 { 00:25:19.180 "name": "BaseBdev3", 00:25:19.180 "aliases": [ 00:25:19.180 "306e99c6-d860-443d-90c5-58c6a096bad6" 00:25:19.180 ], 00:25:19.180 "product_name": "Malloc disk", 00:25:19.180 "block_size": 512, 00:25:19.180 "num_blocks": 65536, 00:25:19.180 "uuid": 
"306e99c6-d860-443d-90c5-58c6a096bad6", 00:25:19.180 "assigned_rate_limits": { 00:25:19.180 "rw_ios_per_sec": 0, 00:25:19.180 "rw_mbytes_per_sec": 0, 00:25:19.180 "r_mbytes_per_sec": 0, 00:25:19.180 "w_mbytes_per_sec": 0 00:25:19.180 }, 00:25:19.180 "claimed": true, 00:25:19.180 "claim_type": "exclusive_write", 00:25:19.180 "zoned": false, 00:25:19.180 "supported_io_types": { 00:25:19.180 "read": true, 00:25:19.180 "write": true, 00:25:19.180 "unmap": true, 00:25:19.180 "flush": true, 00:25:19.180 "reset": true, 00:25:19.180 "nvme_admin": false, 00:25:19.180 "nvme_io": false, 00:25:19.180 "nvme_io_md": false, 00:25:19.180 "write_zeroes": true, 00:25:19.180 "zcopy": true, 00:25:19.180 "get_zone_info": false, 00:25:19.180 "zone_management": false, 00:25:19.180 "zone_append": false, 00:25:19.180 "compare": false, 00:25:19.180 "compare_and_write": false, 00:25:19.180 "abort": true, 00:25:19.180 "seek_hole": false, 00:25:19.180 "seek_data": false, 00:25:19.180 "copy": true, 00:25:19.180 "nvme_iov_md": false 00:25:19.180 }, 00:25:19.180 "memory_domains": [ 00:25:19.180 { 00:25:19.180 "dma_device_id": "system", 00:25:19.180 "dma_device_type": 1 00:25:19.180 }, 00:25:19.180 { 00:25:19.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.180 "dma_device_type": 2 00:25:19.180 } 00:25:19.180 ], 00:25:19.180 "driver_specific": {} 00:25:19.180 } 00:25:19.180 ] 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:19.180 17:12:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.180 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.180 "name": "Existed_Raid", 00:25:19.180 "uuid": "ab259c72-2369-40f7-a5d6-7c70d1805d7f", 00:25:19.180 "strip_size_kb": 64, 00:25:19.180 "state": "online", 00:25:19.180 "raid_level": "raid5f", 00:25:19.180 "superblock": true, 00:25:19.180 
"num_base_bdevs": 3, 00:25:19.180 "num_base_bdevs_discovered": 3, 00:25:19.181 "num_base_bdevs_operational": 3, 00:25:19.181 "base_bdevs_list": [ 00:25:19.181 { 00:25:19.181 "name": "BaseBdev1", 00:25:19.181 "uuid": "36d2aa21-da0d-44ef-868c-2f2e2261e4bb", 00:25:19.181 "is_configured": true, 00:25:19.181 "data_offset": 2048, 00:25:19.181 "data_size": 63488 00:25:19.181 }, 00:25:19.181 { 00:25:19.181 "name": "BaseBdev2", 00:25:19.181 "uuid": "93b7f710-7030-4819-b341-ec5b59b58fb6", 00:25:19.181 "is_configured": true, 00:25:19.181 "data_offset": 2048, 00:25:19.181 "data_size": 63488 00:25:19.181 }, 00:25:19.181 { 00:25:19.181 "name": "BaseBdev3", 00:25:19.181 "uuid": "306e99c6-d860-443d-90c5-58c6a096bad6", 00:25:19.181 "is_configured": true, 00:25:19.181 "data_offset": 2048, 00:25:19.181 "data_size": 63488 00:25:19.181 } 00:25:19.181 ] 00:25:19.181 }' 00:25:19.181 17:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.181 17:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.438 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:19.438 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:19.438 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:19.438 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:19.438 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:19.438 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:19.696 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.697 [2024-11-08 17:12:56.158290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:19.697 "name": "Existed_Raid", 00:25:19.697 "aliases": [ 00:25:19.697 "ab259c72-2369-40f7-a5d6-7c70d1805d7f" 00:25:19.697 ], 00:25:19.697 "product_name": "Raid Volume", 00:25:19.697 "block_size": 512, 00:25:19.697 "num_blocks": 126976, 00:25:19.697 "uuid": "ab259c72-2369-40f7-a5d6-7c70d1805d7f", 00:25:19.697 "assigned_rate_limits": { 00:25:19.697 "rw_ios_per_sec": 0, 00:25:19.697 "rw_mbytes_per_sec": 0, 00:25:19.697 "r_mbytes_per_sec": 0, 00:25:19.697 "w_mbytes_per_sec": 0 00:25:19.697 }, 00:25:19.697 "claimed": false, 00:25:19.697 "zoned": false, 00:25:19.697 "supported_io_types": { 00:25:19.697 "read": true, 00:25:19.697 "write": true, 00:25:19.697 "unmap": false, 00:25:19.697 "flush": false, 00:25:19.697 "reset": true, 00:25:19.697 "nvme_admin": false, 00:25:19.697 "nvme_io": false, 00:25:19.697 "nvme_io_md": false, 00:25:19.697 "write_zeroes": true, 00:25:19.697 "zcopy": false, 00:25:19.697 "get_zone_info": false, 00:25:19.697 "zone_management": false, 00:25:19.697 "zone_append": false, 00:25:19.697 "compare": false, 00:25:19.697 "compare_and_write": false, 00:25:19.697 "abort": false, 00:25:19.697 "seek_hole": false, 00:25:19.697 "seek_data": false, 00:25:19.697 "copy": false, 00:25:19.697 "nvme_iov_md": false 00:25:19.697 }, 00:25:19.697 "driver_specific": { 00:25:19.697 "raid": { 00:25:19.697 "uuid": "ab259c72-2369-40f7-a5d6-7c70d1805d7f", 00:25:19.697 
"strip_size_kb": 64, 00:25:19.697 "state": "online", 00:25:19.697 "raid_level": "raid5f", 00:25:19.697 "superblock": true, 00:25:19.697 "num_base_bdevs": 3, 00:25:19.697 "num_base_bdevs_discovered": 3, 00:25:19.697 "num_base_bdevs_operational": 3, 00:25:19.697 "base_bdevs_list": [ 00:25:19.697 { 00:25:19.697 "name": "BaseBdev1", 00:25:19.697 "uuid": "36d2aa21-da0d-44ef-868c-2f2e2261e4bb", 00:25:19.697 "is_configured": true, 00:25:19.697 "data_offset": 2048, 00:25:19.697 "data_size": 63488 00:25:19.697 }, 00:25:19.697 { 00:25:19.697 "name": "BaseBdev2", 00:25:19.697 "uuid": "93b7f710-7030-4819-b341-ec5b59b58fb6", 00:25:19.697 "is_configured": true, 00:25:19.697 "data_offset": 2048, 00:25:19.697 "data_size": 63488 00:25:19.697 }, 00:25:19.697 { 00:25:19.697 "name": "BaseBdev3", 00:25:19.697 "uuid": "306e99c6-d860-443d-90c5-58c6a096bad6", 00:25:19.697 "is_configured": true, 00:25:19.697 "data_offset": 2048, 00:25:19.697 "data_size": 63488 00:25:19.697 } 00:25:19.697 ] 00:25:19.697 } 00:25:19.697 } 00:25:19.697 }' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:19.697 BaseBdev2 00:25:19.697 BaseBdev3' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.697 17:12:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.697 [2024-11-08 17:12:56.342337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:19.697 
17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:19.697 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.956 "name": "Existed_Raid", 00:25:19.956 "uuid": "ab259c72-2369-40f7-a5d6-7c70d1805d7f", 00:25:19.956 "strip_size_kb": 64, 00:25:19.956 "state": "online", 00:25:19.956 "raid_level": "raid5f", 00:25:19.956 "superblock": true, 00:25:19.956 "num_base_bdevs": 3, 00:25:19.956 "num_base_bdevs_discovered": 2, 00:25:19.956 "num_base_bdevs_operational": 2, 00:25:19.956 
"base_bdevs_list": [ 00:25:19.956 { 00:25:19.956 "name": null, 00:25:19.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.956 "is_configured": false, 00:25:19.956 "data_offset": 0, 00:25:19.956 "data_size": 63488 00:25:19.956 }, 00:25:19.956 { 00:25:19.956 "name": "BaseBdev2", 00:25:19.956 "uuid": "93b7f710-7030-4819-b341-ec5b59b58fb6", 00:25:19.956 "is_configured": true, 00:25:19.956 "data_offset": 2048, 00:25:19.956 "data_size": 63488 00:25:19.956 }, 00:25:19.956 { 00:25:19.956 "name": "BaseBdev3", 00:25:19.956 "uuid": "306e99c6-d860-443d-90c5-58c6a096bad6", 00:25:19.956 "is_configured": true, 00:25:19.956 "data_offset": 2048, 00:25:19.956 "data_size": 63488 00:25:19.956 } 00:25:19.956 ] 00:25:19.956 }' 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.956 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:20.214 17:12:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.214 [2024-11-08 17:12:56.768671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:20.214 [2024-11-08 17:12:56.768952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:20.214 [2024-11-08 17:12:56.831195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:20.214 17:12:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.214 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.214 [2024-11-08 17:12:56.875240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:20.214 [2024-11-08 17:12:56.875285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.473 17:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.473 BaseBdev2 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.473 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.473 [ 00:25:20.473 { 00:25:20.473 "name": "BaseBdev2", 
00:25:20.473 "aliases": [ 00:25:20.473 "70874a3c-d9da-4016-85f0-d6e9356c88a8" 00:25:20.473 ], 00:25:20.473 "product_name": "Malloc disk", 00:25:20.473 "block_size": 512, 00:25:20.473 "num_blocks": 65536, 00:25:20.473 "uuid": "70874a3c-d9da-4016-85f0-d6e9356c88a8", 00:25:20.473 "assigned_rate_limits": { 00:25:20.473 "rw_ios_per_sec": 0, 00:25:20.473 "rw_mbytes_per_sec": 0, 00:25:20.473 "r_mbytes_per_sec": 0, 00:25:20.473 "w_mbytes_per_sec": 0 00:25:20.473 }, 00:25:20.473 "claimed": false, 00:25:20.473 "zoned": false, 00:25:20.473 "supported_io_types": { 00:25:20.473 "read": true, 00:25:20.473 "write": true, 00:25:20.473 "unmap": true, 00:25:20.473 "flush": true, 00:25:20.474 "reset": true, 00:25:20.474 "nvme_admin": false, 00:25:20.474 "nvme_io": false, 00:25:20.474 "nvme_io_md": false, 00:25:20.474 "write_zeroes": true, 00:25:20.474 "zcopy": true, 00:25:20.474 "get_zone_info": false, 00:25:20.474 "zone_management": false, 00:25:20.474 "zone_append": false, 00:25:20.474 "compare": false, 00:25:20.474 "compare_and_write": false, 00:25:20.474 "abort": true, 00:25:20.474 "seek_hole": false, 00:25:20.474 "seek_data": false, 00:25:20.474 "copy": true, 00:25:20.474 "nvme_iov_md": false 00:25:20.474 }, 00:25:20.474 "memory_domains": [ 00:25:20.474 { 00:25:20.474 "dma_device_id": "system", 00:25:20.474 "dma_device_type": 1 00:25:20.474 }, 00:25:20.474 { 00:25:20.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.474 "dma_device_type": 2 00:25:20.474 } 00:25:20.474 ], 00:25:20.474 "driver_specific": {} 00:25:20.474 } 00:25:20.474 ] 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.474 BaseBdev3 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.474 [ 00:25:20.474 { 00:25:20.474 "name": "BaseBdev3", 00:25:20.474 "aliases": [ 00:25:20.474 "b1d00930-e314-45dd-b8d1-cff07b78c859" 00:25:20.474 ], 00:25:20.474 "product_name": "Malloc disk", 00:25:20.474 "block_size": 512, 00:25:20.474 "num_blocks": 65536, 00:25:20.474 "uuid": "b1d00930-e314-45dd-b8d1-cff07b78c859", 00:25:20.474 "assigned_rate_limits": { 00:25:20.474 "rw_ios_per_sec": 0, 00:25:20.474 "rw_mbytes_per_sec": 0, 00:25:20.474 "r_mbytes_per_sec": 0, 00:25:20.474 "w_mbytes_per_sec": 0 00:25:20.474 }, 00:25:20.474 "claimed": false, 00:25:20.474 "zoned": false, 00:25:20.474 "supported_io_types": { 00:25:20.474 "read": true, 00:25:20.474 "write": true, 00:25:20.474 "unmap": true, 00:25:20.474 "flush": true, 00:25:20.474 "reset": true, 00:25:20.474 "nvme_admin": false, 00:25:20.474 "nvme_io": false, 00:25:20.474 "nvme_io_md": false, 00:25:20.474 "write_zeroes": true, 00:25:20.474 "zcopy": true, 00:25:20.474 "get_zone_info": false, 00:25:20.474 "zone_management": false, 00:25:20.474 "zone_append": false, 00:25:20.474 "compare": false, 00:25:20.474 "compare_and_write": false, 00:25:20.474 "abort": true, 00:25:20.474 "seek_hole": false, 00:25:20.474 "seek_data": false, 00:25:20.474 "copy": true, 00:25:20.474 "nvme_iov_md": false 00:25:20.474 }, 00:25:20.474 "memory_domains": [ 00:25:20.474 { 00:25:20.474 "dma_device_id": "system", 00:25:20.474 "dma_device_type": 1 00:25:20.474 }, 00:25:20.474 { 00:25:20.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.474 "dma_device_type": 2 00:25:20.474 } 00:25:20.474 ], 00:25:20.474 "driver_specific": {} 00:25:20.474 } 00:25:20.474 ] 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:20.474 
17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.474 [2024-11-08 17:12:57.094193] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:20.474 [2024-11-08 17:12:57.094341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:20.474 [2024-11-08 17:12:57.094416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:20.474 [2024-11-08 17:12:57.096382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.474 "name": "Existed_Raid", 00:25:20.474 "uuid": "b4e67192-50c8-49fc-a748-069d07417417", 00:25:20.474 "strip_size_kb": 64, 00:25:20.474 "state": "configuring", 00:25:20.474 "raid_level": "raid5f", 00:25:20.474 "superblock": true, 00:25:20.474 "num_base_bdevs": 3, 00:25:20.474 "num_base_bdevs_discovered": 2, 00:25:20.474 "num_base_bdevs_operational": 3, 00:25:20.474 "base_bdevs_list": [ 00:25:20.474 { 00:25:20.474 "name": "BaseBdev1", 00:25:20.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.474 "is_configured": false, 00:25:20.474 "data_offset": 0, 00:25:20.474 "data_size": 0 00:25:20.474 }, 00:25:20.474 { 00:25:20.474 "name": "BaseBdev2", 00:25:20.474 "uuid": "70874a3c-d9da-4016-85f0-d6e9356c88a8", 00:25:20.474 "is_configured": true, 00:25:20.474 "data_offset": 2048, 00:25:20.474 "data_size": 63488 00:25:20.474 }, 00:25:20.474 { 00:25:20.474 "name": "BaseBdev3", 00:25:20.474 "uuid": 
"b1d00930-e314-45dd-b8d1-cff07b78c859", 00:25:20.474 "is_configured": true, 00:25:20.474 "data_offset": 2048, 00:25:20.474 "data_size": 63488 00:25:20.474 } 00:25:20.474 ] 00:25:20.474 }' 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.474 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.733 [2024-11-08 17:12:57.426274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.733 17:12:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.733 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.991 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.991 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.991 "name": "Existed_Raid", 00:25:20.991 "uuid": "b4e67192-50c8-49fc-a748-069d07417417", 00:25:20.991 "strip_size_kb": 64, 00:25:20.991 "state": "configuring", 00:25:20.991 "raid_level": "raid5f", 00:25:20.991 "superblock": true, 00:25:20.991 "num_base_bdevs": 3, 00:25:20.991 "num_base_bdevs_discovered": 1, 00:25:20.991 "num_base_bdevs_operational": 3, 00:25:20.991 "base_bdevs_list": [ 00:25:20.991 { 00:25:20.991 "name": "BaseBdev1", 00:25:20.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.991 "is_configured": false, 00:25:20.991 "data_offset": 0, 00:25:20.991 "data_size": 0 00:25:20.991 }, 00:25:20.991 { 00:25:20.991 "name": null, 00:25:20.991 "uuid": "70874a3c-d9da-4016-85f0-d6e9356c88a8", 00:25:20.991 "is_configured": false, 00:25:20.991 "data_offset": 0, 00:25:20.991 "data_size": 63488 00:25:20.991 }, 00:25:20.991 { 00:25:20.991 "name": "BaseBdev3", 00:25:20.991 "uuid": "b1d00930-e314-45dd-b8d1-cff07b78c859", 00:25:20.991 "is_configured": true, 00:25:20.991 "data_offset": 2048, 00:25:20.991 "data_size": 63488 00:25:20.991 } 00:25:20.991 ] 
00:25:20.991 }' 00:25:20.991 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.991 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.251 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.251 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.251 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.252 [2024-11-08 17:12:57.818776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:21.252 BaseBdev1 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.252 [ 00:25:21.252 { 00:25:21.252 "name": "BaseBdev1", 00:25:21.252 "aliases": [ 00:25:21.252 "15d39e97-b349-421a-bc00-c599998ff5e1" 00:25:21.252 ], 00:25:21.252 "product_name": "Malloc disk", 00:25:21.252 "block_size": 512, 00:25:21.252 "num_blocks": 65536, 00:25:21.252 "uuid": "15d39e97-b349-421a-bc00-c599998ff5e1", 00:25:21.252 "assigned_rate_limits": { 00:25:21.252 "rw_ios_per_sec": 0, 00:25:21.252 "rw_mbytes_per_sec": 0, 00:25:21.252 "r_mbytes_per_sec": 0, 00:25:21.252 "w_mbytes_per_sec": 0 00:25:21.252 }, 00:25:21.252 "claimed": true, 00:25:21.252 "claim_type": "exclusive_write", 00:25:21.252 "zoned": false, 00:25:21.252 "supported_io_types": { 00:25:21.252 "read": true, 00:25:21.252 "write": true, 00:25:21.252 "unmap": true, 00:25:21.252 "flush": true, 00:25:21.252 "reset": true, 00:25:21.252 "nvme_admin": false, 00:25:21.252 "nvme_io": false, 00:25:21.252 
"nvme_io_md": false, 00:25:21.252 "write_zeroes": true, 00:25:21.252 "zcopy": true, 00:25:21.252 "get_zone_info": false, 00:25:21.252 "zone_management": false, 00:25:21.252 "zone_append": false, 00:25:21.252 "compare": false, 00:25:21.252 "compare_and_write": false, 00:25:21.252 "abort": true, 00:25:21.252 "seek_hole": false, 00:25:21.252 "seek_data": false, 00:25:21.252 "copy": true, 00:25:21.252 "nvme_iov_md": false 00:25:21.252 }, 00:25:21.252 "memory_domains": [ 00:25:21.252 { 00:25:21.252 "dma_device_id": "system", 00:25:21.252 "dma_device_type": 1 00:25:21.252 }, 00:25:21.252 { 00:25:21.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.252 "dma_device_type": 2 00:25:21.252 } 00:25:21.252 ], 00:25:21.252 "driver_specific": {} 00:25:21.252 } 00:25:21.252 ] 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.252 
17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.252 "name": "Existed_Raid", 00:25:21.252 "uuid": "b4e67192-50c8-49fc-a748-069d07417417", 00:25:21.252 "strip_size_kb": 64, 00:25:21.252 "state": "configuring", 00:25:21.252 "raid_level": "raid5f", 00:25:21.252 "superblock": true, 00:25:21.252 "num_base_bdevs": 3, 00:25:21.252 "num_base_bdevs_discovered": 2, 00:25:21.252 "num_base_bdevs_operational": 3, 00:25:21.252 "base_bdevs_list": [ 00:25:21.252 { 00:25:21.252 "name": "BaseBdev1", 00:25:21.252 "uuid": "15d39e97-b349-421a-bc00-c599998ff5e1", 00:25:21.252 "is_configured": true, 00:25:21.252 "data_offset": 2048, 00:25:21.252 "data_size": 63488 00:25:21.252 }, 00:25:21.252 { 00:25:21.252 "name": null, 00:25:21.252 "uuid": "70874a3c-d9da-4016-85f0-d6e9356c88a8", 00:25:21.252 "is_configured": false, 00:25:21.252 "data_offset": 0, 00:25:21.252 "data_size": 63488 00:25:21.252 }, 00:25:21.252 { 00:25:21.252 "name": "BaseBdev3", 00:25:21.252 "uuid": "b1d00930-e314-45dd-b8d1-cff07b78c859", 00:25:21.252 "is_configured": true, 00:25:21.252 "data_offset": 2048, 00:25:21.252 "data_size": 63488 00:25:21.252 } 
00:25:21.252 ] 00:25:21.252 }' 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.252 17:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.511 [2024-11-08 17:12:58.218929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:21.511 17:12:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.769 "name": "Existed_Raid", 00:25:21.769 "uuid": "b4e67192-50c8-49fc-a748-069d07417417", 00:25:21.769 "strip_size_kb": 64, 00:25:21.769 "state": "configuring", 00:25:21.769 "raid_level": "raid5f", 00:25:21.769 "superblock": true, 00:25:21.769 "num_base_bdevs": 3, 00:25:21.769 "num_base_bdevs_discovered": 1, 00:25:21.769 "num_base_bdevs_operational": 3, 00:25:21.769 "base_bdevs_list": [ 00:25:21.769 { 00:25:21.769 "name": "BaseBdev1", 00:25:21.769 "uuid": "15d39e97-b349-421a-bc00-c599998ff5e1", 00:25:21.769 "is_configured": true, 
00:25:21.769 "data_offset": 2048, 00:25:21.769 "data_size": 63488 00:25:21.769 }, 00:25:21.769 { 00:25:21.769 "name": null, 00:25:21.769 "uuid": "70874a3c-d9da-4016-85f0-d6e9356c88a8", 00:25:21.769 "is_configured": false, 00:25:21.769 "data_offset": 0, 00:25:21.769 "data_size": 63488 00:25:21.769 }, 00:25:21.769 { 00:25:21.769 "name": null, 00:25:21.769 "uuid": "b1d00930-e314-45dd-b8d1-cff07b78c859", 00:25:21.769 "is_configured": false, 00:25:21.769 "data_offset": 0, 00:25:21.769 "data_size": 63488 00:25:21.769 } 00:25:21.769 ] 00:25:21.769 }' 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.769 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.027 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.027 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:22.027 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.027 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.027 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.027 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:22.027 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:22.027 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.027 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.027 [2024-11-08 17:12:58.591076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:22.027 17:12:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.027 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:22.028 "name": "Existed_Raid", 00:25:22.028 "uuid": "b4e67192-50c8-49fc-a748-069d07417417", 00:25:22.028 "strip_size_kb": 64, 00:25:22.028 "state": "configuring", 00:25:22.028 "raid_level": "raid5f", 00:25:22.028 "superblock": true, 00:25:22.028 "num_base_bdevs": 3, 00:25:22.028 "num_base_bdevs_discovered": 2, 00:25:22.028 "num_base_bdevs_operational": 3, 00:25:22.028 "base_bdevs_list": [ 00:25:22.028 { 00:25:22.028 "name": "BaseBdev1", 00:25:22.028 "uuid": "15d39e97-b349-421a-bc00-c599998ff5e1", 00:25:22.028 "is_configured": true, 00:25:22.028 "data_offset": 2048, 00:25:22.028 "data_size": 63488 00:25:22.028 }, 00:25:22.028 { 00:25:22.028 "name": null, 00:25:22.028 "uuid": "70874a3c-d9da-4016-85f0-d6e9356c88a8", 00:25:22.028 "is_configured": false, 00:25:22.028 "data_offset": 0, 00:25:22.028 "data_size": 63488 00:25:22.028 }, 00:25:22.028 { 00:25:22.028 "name": "BaseBdev3", 00:25:22.028 "uuid": "b1d00930-e314-45dd-b8d1-cff07b78c859", 00:25:22.028 "is_configured": true, 00:25:22.028 "data_offset": 2048, 00:25:22.028 "data_size": 63488 00:25:22.028 } 00:25:22.028 ] 00:25:22.028 }' 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.028 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.285 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.285 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.285 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:22.285 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.285 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.285 17:12:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:22.285 17:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:22.285 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.285 17:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.286 [2024-11-08 17:12:58.939173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.544 17:12:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.544 "name": "Existed_Raid", 00:25:22.544 "uuid": "b4e67192-50c8-49fc-a748-069d07417417", 00:25:22.544 "strip_size_kb": 64, 00:25:22.544 "state": "configuring", 00:25:22.544 "raid_level": "raid5f", 00:25:22.544 "superblock": true, 00:25:22.544 "num_base_bdevs": 3, 00:25:22.544 "num_base_bdevs_discovered": 1, 00:25:22.544 "num_base_bdevs_operational": 3, 00:25:22.544 "base_bdevs_list": [ 00:25:22.544 { 00:25:22.544 "name": null, 00:25:22.544 "uuid": "15d39e97-b349-421a-bc00-c599998ff5e1", 00:25:22.544 "is_configured": false, 00:25:22.544 "data_offset": 0, 00:25:22.544 "data_size": 63488 00:25:22.544 }, 00:25:22.544 { 00:25:22.544 "name": null, 00:25:22.544 "uuid": "70874a3c-d9da-4016-85f0-d6e9356c88a8", 00:25:22.544 "is_configured": false, 00:25:22.544 "data_offset": 0, 00:25:22.544 "data_size": 63488 00:25:22.544 }, 00:25:22.544 { 00:25:22.544 "name": "BaseBdev3", 00:25:22.544 "uuid": "b1d00930-e314-45dd-b8d1-cff07b78c859", 00:25:22.544 "is_configured": true, 00:25:22.544 "data_offset": 2048, 00:25:22.544 "data_size": 63488 00:25:22.544 } 00:25:22.544 ] 00:25:22.544 }' 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.544 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 
00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.803 [2024-11-08 17:12:59.346120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:22.803 17:12:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.803 "name": "Existed_Raid", 00:25:22.803 "uuid": "b4e67192-50c8-49fc-a748-069d07417417", 00:25:22.803 "strip_size_kb": 64, 00:25:22.803 "state": "configuring", 00:25:22.803 "raid_level": "raid5f", 00:25:22.803 "superblock": true, 00:25:22.803 "num_base_bdevs": 3, 00:25:22.803 "num_base_bdevs_discovered": 2, 00:25:22.803 "num_base_bdevs_operational": 3, 00:25:22.803 "base_bdevs_list": [ 00:25:22.803 { 00:25:22.803 "name": null, 00:25:22.803 "uuid": "15d39e97-b349-421a-bc00-c599998ff5e1", 00:25:22.803 "is_configured": false, 00:25:22.803 "data_offset": 0, 00:25:22.803 "data_size": 63488 00:25:22.803 }, 00:25:22.803 { 00:25:22.803 "name": "BaseBdev2", 00:25:22.803 "uuid": "70874a3c-d9da-4016-85f0-d6e9356c88a8", 00:25:22.803 "is_configured": true, 00:25:22.803 "data_offset": 2048, 00:25:22.803 "data_size": 63488 00:25:22.803 }, 00:25:22.803 { 
00:25:22.803 "name": "BaseBdev3", 00:25:22.803 "uuid": "b1d00930-e314-45dd-b8d1-cff07b78c859", 00:25:22.803 "is_configured": true, 00:25:22.803 "data_offset": 2048, 00:25:22.803 "data_size": 63488 00:25:22.803 } 00:25:22.803 ] 00:25:22.803 }' 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.803 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 15d39e97-b349-421a-bc00-c599998ff5e1 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.065 [2024-11-08 17:12:59.758874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:23.065 [2024-11-08 17:12:59.759078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:23.065 [2024-11-08 17:12:59.759094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:23.065 [2024-11-08 17:12:59.759356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:23.065 NewBaseBdev 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.065 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.065 [2024-11-08 17:12:59.763155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:23.065 
[2024-11-08 17:12:59.763178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:23.065 [2024-11-08 17:12:59.763327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.066 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.066 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:23.066 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.066 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.330 [ 00:25:23.330 { 00:25:23.330 "name": "NewBaseBdev", 00:25:23.330 "aliases": [ 00:25:23.330 "15d39e97-b349-421a-bc00-c599998ff5e1" 00:25:23.330 ], 00:25:23.330 "product_name": "Malloc disk", 00:25:23.330 "block_size": 512, 00:25:23.330 "num_blocks": 65536, 00:25:23.330 "uuid": "15d39e97-b349-421a-bc00-c599998ff5e1", 00:25:23.330 "assigned_rate_limits": { 00:25:23.330 "rw_ios_per_sec": 0, 00:25:23.330 "rw_mbytes_per_sec": 0, 00:25:23.330 "r_mbytes_per_sec": 0, 00:25:23.330 "w_mbytes_per_sec": 0 00:25:23.330 }, 00:25:23.330 "claimed": true, 00:25:23.330 "claim_type": "exclusive_write", 00:25:23.330 "zoned": false, 00:25:23.330 "supported_io_types": { 00:25:23.330 "read": true, 00:25:23.330 "write": true, 00:25:23.330 "unmap": true, 00:25:23.330 "flush": true, 00:25:23.330 "reset": true, 00:25:23.330 "nvme_admin": false, 00:25:23.330 "nvme_io": false, 00:25:23.330 "nvme_io_md": false, 00:25:23.330 "write_zeroes": true, 00:25:23.330 "zcopy": true, 00:25:23.330 "get_zone_info": false, 00:25:23.330 "zone_management": false, 00:25:23.330 "zone_append": false, 00:25:23.330 "compare": false, 00:25:23.330 "compare_and_write": false, 00:25:23.330 "abort": true, 00:25:23.330 "seek_hole": false, 00:25:23.330 "seek_data": false, 
00:25:23.330 "copy": true, 00:25:23.330 "nvme_iov_md": false 00:25:23.330 }, 00:25:23.330 "memory_domains": [ 00:25:23.330 { 00:25:23.330 "dma_device_id": "system", 00:25:23.330 "dma_device_type": 1 00:25:23.330 }, 00:25:23.330 { 00:25:23.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.330 "dma_device_type": 2 00:25:23.330 } 00:25:23.330 ], 00:25:23.330 "driver_specific": {} 00:25:23.330 } 00:25:23.330 ] 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.330 17:12:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.330 "name": "Existed_Raid", 00:25:23.330 "uuid": "b4e67192-50c8-49fc-a748-069d07417417", 00:25:23.330 "strip_size_kb": 64, 00:25:23.330 "state": "online", 00:25:23.330 "raid_level": "raid5f", 00:25:23.330 "superblock": true, 00:25:23.330 "num_base_bdevs": 3, 00:25:23.330 "num_base_bdevs_discovered": 3, 00:25:23.330 "num_base_bdevs_operational": 3, 00:25:23.330 "base_bdevs_list": [ 00:25:23.330 { 00:25:23.330 "name": "NewBaseBdev", 00:25:23.330 "uuid": "15d39e97-b349-421a-bc00-c599998ff5e1", 00:25:23.330 "is_configured": true, 00:25:23.330 "data_offset": 2048, 00:25:23.330 "data_size": 63488 00:25:23.330 }, 00:25:23.330 { 00:25:23.330 "name": "BaseBdev2", 00:25:23.330 "uuid": "70874a3c-d9da-4016-85f0-d6e9356c88a8", 00:25:23.330 "is_configured": true, 00:25:23.330 "data_offset": 2048, 00:25:23.330 "data_size": 63488 00:25:23.330 }, 00:25:23.330 { 00:25:23.330 "name": "BaseBdev3", 00:25:23.330 "uuid": "b1d00930-e314-45dd-b8d1-cff07b78c859", 00:25:23.330 "is_configured": true, 00:25:23.330 "data_offset": 2048, 00:25:23.330 "data_size": 63488 00:25:23.330 } 00:25:23.330 ] 00:25:23.330 }' 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.330 17:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:23.588 [2024-11-08 17:13:00.099860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:23.588 "name": "Existed_Raid", 00:25:23.588 "aliases": [ 00:25:23.588 "b4e67192-50c8-49fc-a748-069d07417417" 00:25:23.588 ], 00:25:23.588 "product_name": "Raid Volume", 00:25:23.588 "block_size": 512, 00:25:23.588 "num_blocks": 126976, 00:25:23.588 "uuid": "b4e67192-50c8-49fc-a748-069d07417417", 00:25:23.588 "assigned_rate_limits": { 00:25:23.588 "rw_ios_per_sec": 0, 00:25:23.588 "rw_mbytes_per_sec": 0, 00:25:23.588 "r_mbytes_per_sec": 0, 00:25:23.588 "w_mbytes_per_sec": 0 00:25:23.588 }, 00:25:23.588 "claimed": false, 00:25:23.588 "zoned": false, 00:25:23.588 
"supported_io_types": { 00:25:23.588 "read": true, 00:25:23.588 "write": true, 00:25:23.588 "unmap": false, 00:25:23.588 "flush": false, 00:25:23.588 "reset": true, 00:25:23.588 "nvme_admin": false, 00:25:23.588 "nvme_io": false, 00:25:23.588 "nvme_io_md": false, 00:25:23.588 "write_zeroes": true, 00:25:23.588 "zcopy": false, 00:25:23.588 "get_zone_info": false, 00:25:23.588 "zone_management": false, 00:25:23.588 "zone_append": false, 00:25:23.588 "compare": false, 00:25:23.588 "compare_and_write": false, 00:25:23.588 "abort": false, 00:25:23.588 "seek_hole": false, 00:25:23.588 "seek_data": false, 00:25:23.588 "copy": false, 00:25:23.588 "nvme_iov_md": false 00:25:23.588 }, 00:25:23.588 "driver_specific": { 00:25:23.588 "raid": { 00:25:23.588 "uuid": "b4e67192-50c8-49fc-a748-069d07417417", 00:25:23.588 "strip_size_kb": 64, 00:25:23.588 "state": "online", 00:25:23.588 "raid_level": "raid5f", 00:25:23.588 "superblock": true, 00:25:23.588 "num_base_bdevs": 3, 00:25:23.588 "num_base_bdevs_discovered": 3, 00:25:23.588 "num_base_bdevs_operational": 3, 00:25:23.588 "base_bdevs_list": [ 00:25:23.588 { 00:25:23.588 "name": "NewBaseBdev", 00:25:23.588 "uuid": "15d39e97-b349-421a-bc00-c599998ff5e1", 00:25:23.588 "is_configured": true, 00:25:23.588 "data_offset": 2048, 00:25:23.588 "data_size": 63488 00:25:23.588 }, 00:25:23.588 { 00:25:23.588 "name": "BaseBdev2", 00:25:23.588 "uuid": "70874a3c-d9da-4016-85f0-d6e9356c88a8", 00:25:23.588 "is_configured": true, 00:25:23.588 "data_offset": 2048, 00:25:23.588 "data_size": 63488 00:25:23.588 }, 00:25:23.588 { 00:25:23.588 "name": "BaseBdev3", 00:25:23.588 "uuid": "b1d00930-e314-45dd-b8d1-cff07b78c859", 00:25:23.588 "is_configured": true, 00:25:23.588 "data_offset": 2048, 00:25:23.588 "data_size": 63488 00:25:23.588 } 00:25:23.588 ] 00:25:23.588 } 00:25:23.588 } 00:25:23.588 }' 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:23.588 BaseBdev2 00:25:23.588 BaseBdev3' 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.588 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.589 [2024-11-08 17:13:00.291656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:23.589 [2024-11-08 17:13:00.291689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:25:23.589 [2024-11-08 17:13:00.291783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:23.589 [2024-11-08 17:13:00.292081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:23.589 [2024-11-08 17:13:00.292095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78956 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78956 ']' 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 78956 00:25:23.589 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:25:23.847 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:23.847 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78956 00:25:23.847 killing process with pid 78956 00:25:23.847 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:23.847 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:23.847 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78956' 00:25:23.847 17:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 78956 00:25:23.847 [2024-11-08 17:13:00.320649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:23.847 17:13:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@976 -- # wait 78956 00:25:23.847 [2024-11-08 17:13:00.520417] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:24.781 17:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:24.781 00:25:24.781 real 0m7.955s 00:25:24.781 user 0m12.589s 00:25:24.781 sys 0m1.324s 00:25:24.781 17:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:24.781 17:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.781 ************************************ 00:25:24.781 END TEST raid5f_state_function_test_sb 00:25:24.781 ************************************ 00:25:24.781 17:13:01 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:25:24.781 17:13:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:24.781 17:13:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:24.781 17:13:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:24.781 ************************************ 00:25:24.781 START TEST raid5f_superblock_test 00:25:24.781 ************************************ 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79549 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79549 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 79549 ']' 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:24.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:24.781 17:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.781 [2024-11-08 17:13:01.402064] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:25:24.781 [2024-11-08 17:13:01.402182] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79549 ] 00:25:25.039 [2024-11-08 17:13:01.564135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.039 [2024-11-08 17:13:01.682854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.297 [2024-11-08 17:13:01.831187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:25.297 [2024-11-08 17:13:01.831266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.555 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.813 malloc1 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.813 [2024-11-08 17:13:02.289297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:25.813 [2024-11-08 17:13:02.289372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.813 [2024-11-08 17:13:02.289396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:25.813 [2024-11-08 17:13:02.289409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.813 [2024-11-08 17:13:02.291739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.813 [2024-11-08 17:13:02.291789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:25.813 pt1 00:25:25.813 
17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.813 malloc2 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.813 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.813 [2024-11-08 17:13:02.327649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:25.813 [2024-11-08 
17:13:02.327702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.813 [2024-11-08 17:13:02.327723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:25.813 [2024-11-08 17:13:02.327732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.813 [2024-11-08 17:13:02.329952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.814 [2024-11-08 17:13:02.329986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:25.814 pt2 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.814 malloc3 00:25:25.814 17:13:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.814 [2024-11-08 17:13:02.385308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:25.814 [2024-11-08 17:13:02.385367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.814 [2024-11-08 17:13:02.385391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:25.814 [2024-11-08 17:13:02.385402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.814 [2024-11-08 17:13:02.387667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.814 [2024-11-08 17:13:02.387706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:25.814 pt3 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.814 [2024-11-08 17:13:02.393368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:25:25.814 [2024-11-08 17:13:02.395351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:25.814 [2024-11-08 17:13:02.395528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:25.814 [2024-11-08 17:13:02.395704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:25.814 [2024-11-08 17:13:02.395722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:25.814 [2024-11-08 17:13:02.395989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:25.814 [2024-11-08 17:13:02.399876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:25.814 [2024-11-08 17:13:02.399894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:25.814 [2024-11-08 17:13:02.400079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:25.814 "name": "raid_bdev1", 00:25:25.814 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:25.814 "strip_size_kb": 64, 00:25:25.814 "state": "online", 00:25:25.814 "raid_level": "raid5f", 00:25:25.814 "superblock": true, 00:25:25.814 "num_base_bdevs": 3, 00:25:25.814 "num_base_bdevs_discovered": 3, 00:25:25.814 "num_base_bdevs_operational": 3, 00:25:25.814 "base_bdevs_list": [ 00:25:25.814 { 00:25:25.814 "name": "pt1", 00:25:25.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:25.814 "is_configured": true, 00:25:25.814 "data_offset": 2048, 00:25:25.814 "data_size": 63488 00:25:25.814 }, 00:25:25.814 { 00:25:25.814 "name": "pt2", 00:25:25.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:25.814 "is_configured": true, 00:25:25.814 "data_offset": 2048, 00:25:25.814 "data_size": 63488 00:25:25.814 }, 00:25:25.814 { 00:25:25.814 "name": "pt3", 00:25:25.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:25.814 "is_configured": true, 00:25:25.814 "data_offset": 2048, 00:25:25.814 "data_size": 63488 00:25:25.814 } 00:25:25.814 ] 
00:25:25.814 }' 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:25.814 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.073 [2024-11-08 17:13:02.744719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:26.073 "name": "raid_bdev1", 00:25:26.073 "aliases": [ 00:25:26.073 "61460dda-d733-4592-b427-c34d0f466054" 00:25:26.073 ], 00:25:26.073 "product_name": "Raid Volume", 00:25:26.073 "block_size": 512, 00:25:26.073 "num_blocks": 126976, 00:25:26.073 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:26.073 "assigned_rate_limits": { 00:25:26.073 
"rw_ios_per_sec": 0, 00:25:26.073 "rw_mbytes_per_sec": 0, 00:25:26.073 "r_mbytes_per_sec": 0, 00:25:26.073 "w_mbytes_per_sec": 0 00:25:26.073 }, 00:25:26.073 "claimed": false, 00:25:26.073 "zoned": false, 00:25:26.073 "supported_io_types": { 00:25:26.073 "read": true, 00:25:26.073 "write": true, 00:25:26.073 "unmap": false, 00:25:26.073 "flush": false, 00:25:26.073 "reset": true, 00:25:26.073 "nvme_admin": false, 00:25:26.073 "nvme_io": false, 00:25:26.073 "nvme_io_md": false, 00:25:26.073 "write_zeroes": true, 00:25:26.073 "zcopy": false, 00:25:26.073 "get_zone_info": false, 00:25:26.073 "zone_management": false, 00:25:26.073 "zone_append": false, 00:25:26.073 "compare": false, 00:25:26.073 "compare_and_write": false, 00:25:26.073 "abort": false, 00:25:26.073 "seek_hole": false, 00:25:26.073 "seek_data": false, 00:25:26.073 "copy": false, 00:25:26.073 "nvme_iov_md": false 00:25:26.073 }, 00:25:26.073 "driver_specific": { 00:25:26.073 "raid": { 00:25:26.073 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:26.073 "strip_size_kb": 64, 00:25:26.073 "state": "online", 00:25:26.073 "raid_level": "raid5f", 00:25:26.073 "superblock": true, 00:25:26.073 "num_base_bdevs": 3, 00:25:26.073 "num_base_bdevs_discovered": 3, 00:25:26.073 "num_base_bdevs_operational": 3, 00:25:26.073 "base_bdevs_list": [ 00:25:26.073 { 00:25:26.073 "name": "pt1", 00:25:26.073 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:26.073 "is_configured": true, 00:25:26.073 "data_offset": 2048, 00:25:26.073 "data_size": 63488 00:25:26.073 }, 00:25:26.073 { 00:25:26.073 "name": "pt2", 00:25:26.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:26.073 "is_configured": true, 00:25:26.073 "data_offset": 2048, 00:25:26.073 "data_size": 63488 00:25:26.073 }, 00:25:26.073 { 00:25:26.073 "name": "pt3", 00:25:26.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:26.073 "is_configured": true, 00:25:26.073 "data_offset": 2048, 00:25:26.073 "data_size": 63488 00:25:26.073 } 00:25:26.073 ] 
00:25:26.073 } 00:25:26.073 } 00:25:26.073 }' 00:25:26.073 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:26.332 pt2 00:25:26.332 pt3' 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.332 17:13:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.332 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.333 [2024-11-08 17:13:02.945298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=61460dda-d733-4592-b427-c34d0f466054 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 61460dda-d733-4592-b427-c34d0f466054 ']' 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.333 [2024-11-08 17:13:02.972500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:26.333 [2024-11-08 17:13:02.972616] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:26.333 [2024-11-08 17:13:02.972742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:26.333 [2024-11-08 17:13:02.972889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:26.333 [2024-11-08 17:13:02.972960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.333 17:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.333 17:13:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:26.333 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.590 [2024-11-08 17:13:03.080576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:26.590 [2024-11-08 
17:13:03.082615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:26.590 [2024-11-08 17:13:03.082771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:26.590 [2024-11-08 17:13:03.082833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:26.590 [2024-11-08 17:13:03.082882] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:26.590 [2024-11-08 17:13:03.082902] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:26.590 [2024-11-08 17:13:03.082920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:26.590 [2024-11-08 17:13:03.082931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:26.590 request: 00:25:26.590 { 00:25:26.590 "name": "raid_bdev1", 00:25:26.590 "raid_level": "raid5f", 00:25:26.590 "base_bdevs": [ 00:25:26.590 "malloc1", 00:25:26.590 "malloc2", 00:25:26.590 "malloc3" 00:25:26.590 ], 00:25:26.590 "strip_size_kb": 64, 00:25:26.590 "superblock": false, 00:25:26.590 "method": "bdev_raid_create", 00:25:26.590 "req_id": 1 00:25:26.590 } 00:25:26.590 Got JSON-RPC error response 00:25:26.590 response: 00:25:26.590 { 00:25:26.590 "code": -17, 00:25:26.590 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:26.590 } 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.590 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.590 [2024-11-08 17:13:03.124564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:26.590 [2024-11-08 17:13:03.124633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.591 [2024-11-08 17:13:03.124654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:26.591 [2024-11-08 17:13:03.124663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.591 [2024-11-08 17:13:03.126947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.591 [2024-11-08 17:13:03.126983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:26.591 [2024-11-08 17:13:03.127069] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:26.591 [2024-11-08 17:13:03.127115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:26.591 pt1 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.591 "name": "raid_bdev1", 00:25:26.591 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:26.591 "strip_size_kb": 64, 00:25:26.591 "state": "configuring", 00:25:26.591 "raid_level": "raid5f", 00:25:26.591 "superblock": true, 00:25:26.591 "num_base_bdevs": 3, 00:25:26.591 "num_base_bdevs_discovered": 1, 00:25:26.591 "num_base_bdevs_operational": 3, 00:25:26.591 "base_bdevs_list": [ 00:25:26.591 { 00:25:26.591 "name": "pt1", 00:25:26.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:26.591 "is_configured": true, 00:25:26.591 "data_offset": 2048, 00:25:26.591 "data_size": 63488 00:25:26.591 }, 00:25:26.591 { 00:25:26.591 "name": null, 00:25:26.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:26.591 "is_configured": false, 00:25:26.591 "data_offset": 2048, 00:25:26.591 "data_size": 63488 00:25:26.591 }, 00:25:26.591 { 00:25:26.591 "name": null, 00:25:26.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:26.591 "is_configured": false, 00:25:26.591 "data_offset": 2048, 00:25:26.591 "data_size": 63488 00:25:26.591 } 00:25:26.591 ] 00:25:26.591 }' 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.591 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.849 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:25:26.849 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.850 [2024-11-08 17:13:03.464642] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:26.850 [2024-11-08 17:13:03.464858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.850 [2024-11-08 17:13:03.464887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:26.850 [2024-11-08 17:13:03.464897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.850 [2024-11-08 17:13:03.465312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.850 [2024-11-08 17:13:03.465342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:26.850 [2024-11-08 17:13:03.465418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:26.850 [2024-11-08 17:13:03.465438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:26.850 pt2 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.850 [2024-11-08 17:13:03.472639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.850 "name": "raid_bdev1", 00:25:26.850 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:26.850 "strip_size_kb": 64, 00:25:26.850 "state": "configuring", 00:25:26.850 "raid_level": "raid5f", 00:25:26.850 "superblock": true, 00:25:26.850 "num_base_bdevs": 3, 00:25:26.850 "num_base_bdevs_discovered": 1, 00:25:26.850 "num_base_bdevs_operational": 3, 00:25:26.850 "base_bdevs_list": [ 00:25:26.850 { 00:25:26.850 "name": "pt1", 00:25:26.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:26.850 "is_configured": true, 00:25:26.850 "data_offset": 2048, 00:25:26.850 "data_size": 63488 00:25:26.850 }, 00:25:26.850 { 
00:25:26.850 "name": null, 00:25:26.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:26.850 "is_configured": false, 00:25:26.850 "data_offset": 0, 00:25:26.850 "data_size": 63488 00:25:26.850 }, 00:25:26.850 { 00:25:26.850 "name": null, 00:25:26.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:26.850 "is_configured": false, 00:25:26.850 "data_offset": 2048, 00:25:26.850 "data_size": 63488 00:25:26.850 } 00:25:26.850 ] 00:25:26.850 }' 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.850 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.108 [2024-11-08 17:13:03.780729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:27.108 [2024-11-08 17:13:03.780839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.108 [2024-11-08 17:13:03.780861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:27.108 [2024-11-08 17:13:03.780873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.108 [2024-11-08 17:13:03.781381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.108 [2024-11-08 17:13:03.781414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:27.108 [2024-11-08 
17:13:03.781504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:27.108 [2024-11-08 17:13:03.781530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:27.108 pt2 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.108 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.108 [2024-11-08 17:13:03.788704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:27.108 [2024-11-08 17:13:03.788889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.108 [2024-11-08 17:13:03.788912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:27.108 [2024-11-08 17:13:03.788923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.108 [2024-11-08 17:13:03.789321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.108 [2024-11-08 17:13:03.789348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:27.108 [2024-11-08 17:13:03.789412] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:27.108 [2024-11-08 17:13:03.789432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:27.108 [2024-11-08 17:13:03.789558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:25:27.108 [2024-11-08 17:13:03.789570] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:27.108 [2024-11-08 17:13:03.789828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:27.108 [2024-11-08 17:13:03.793679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:27.109 [2024-11-08 17:13:03.793785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:27.109 [2024-11-08 17:13:03.794037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.109 pt3 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.109 17:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.366 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.366 "name": "raid_bdev1", 00:25:27.366 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:27.366 "strip_size_kb": 64, 00:25:27.366 "state": "online", 00:25:27.366 "raid_level": "raid5f", 00:25:27.366 "superblock": true, 00:25:27.366 "num_base_bdevs": 3, 00:25:27.366 "num_base_bdevs_discovered": 3, 00:25:27.366 "num_base_bdevs_operational": 3, 00:25:27.366 "base_bdevs_list": [ 00:25:27.366 { 00:25:27.366 "name": "pt1", 00:25:27.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:27.367 "is_configured": true, 00:25:27.367 "data_offset": 2048, 00:25:27.367 "data_size": 63488 00:25:27.367 }, 00:25:27.367 { 00:25:27.367 "name": "pt2", 00:25:27.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.367 "is_configured": true, 00:25:27.367 "data_offset": 2048, 00:25:27.367 "data_size": 63488 00:25:27.367 }, 00:25:27.367 { 00:25:27.367 "name": "pt3", 00:25:27.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:27.367 "is_configured": true, 00:25:27.367 "data_offset": 2048, 00:25:27.367 "data_size": 63488 00:25:27.367 } 00:25:27.367 ] 00:25:27.367 }' 00:25:27.367 17:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.367 17:13:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.626 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:27.626 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:27.626 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.627 [2024-11-08 17:13:04.138718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:27.627 "name": "raid_bdev1", 00:25:27.627 "aliases": [ 00:25:27.627 "61460dda-d733-4592-b427-c34d0f466054" 00:25:27.627 ], 00:25:27.627 "product_name": "Raid Volume", 00:25:27.627 "block_size": 512, 00:25:27.627 "num_blocks": 126976, 00:25:27.627 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:27.627 "assigned_rate_limits": { 00:25:27.627 "rw_ios_per_sec": 0, 00:25:27.627 "rw_mbytes_per_sec": 0, 00:25:27.627 "r_mbytes_per_sec": 0, 00:25:27.627 "w_mbytes_per_sec": 0 00:25:27.627 }, 
00:25:27.627 "claimed": false, 00:25:27.627 "zoned": false, 00:25:27.627 "supported_io_types": { 00:25:27.627 "read": true, 00:25:27.627 "write": true, 00:25:27.627 "unmap": false, 00:25:27.627 "flush": false, 00:25:27.627 "reset": true, 00:25:27.627 "nvme_admin": false, 00:25:27.627 "nvme_io": false, 00:25:27.627 "nvme_io_md": false, 00:25:27.627 "write_zeroes": true, 00:25:27.627 "zcopy": false, 00:25:27.627 "get_zone_info": false, 00:25:27.627 "zone_management": false, 00:25:27.627 "zone_append": false, 00:25:27.627 "compare": false, 00:25:27.627 "compare_and_write": false, 00:25:27.627 "abort": false, 00:25:27.627 "seek_hole": false, 00:25:27.627 "seek_data": false, 00:25:27.627 "copy": false, 00:25:27.627 "nvme_iov_md": false 00:25:27.627 }, 00:25:27.627 "driver_specific": { 00:25:27.627 "raid": { 00:25:27.627 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:27.627 "strip_size_kb": 64, 00:25:27.627 "state": "online", 00:25:27.627 "raid_level": "raid5f", 00:25:27.627 "superblock": true, 00:25:27.627 "num_base_bdevs": 3, 00:25:27.627 "num_base_bdevs_discovered": 3, 00:25:27.627 "num_base_bdevs_operational": 3, 00:25:27.627 "base_bdevs_list": [ 00:25:27.627 { 00:25:27.627 "name": "pt1", 00:25:27.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:27.627 "is_configured": true, 00:25:27.627 "data_offset": 2048, 00:25:27.627 "data_size": 63488 00:25:27.627 }, 00:25:27.627 { 00:25:27.627 "name": "pt2", 00:25:27.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.627 "is_configured": true, 00:25:27.627 "data_offset": 2048, 00:25:27.627 "data_size": 63488 00:25:27.627 }, 00:25:27.627 { 00:25:27.627 "name": "pt3", 00:25:27.627 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:27.627 "is_configured": true, 00:25:27.627 "data_offset": 2048, 00:25:27.627 "data_size": 63488 00:25:27.627 } 00:25:27.627 ] 00:25:27.627 } 00:25:27.627 } 00:25:27.627 }' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:27.627 pt2 00:25:27.627 pt3' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.627 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.887 [2024-11-08 17:13:04.354723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
61460dda-d733-4592-b427-c34d0f466054 '!=' 61460dda-d733-4592-b427-c34d0f466054 ']' 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.887 [2024-11-08 17:13:04.386613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.887 "name": "raid_bdev1", 00:25:27.887 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:27.887 "strip_size_kb": 64, 00:25:27.887 "state": "online", 00:25:27.887 "raid_level": "raid5f", 00:25:27.887 "superblock": true, 00:25:27.887 "num_base_bdevs": 3, 00:25:27.887 "num_base_bdevs_discovered": 2, 00:25:27.887 "num_base_bdevs_operational": 2, 00:25:27.887 "base_bdevs_list": [ 00:25:27.887 { 00:25:27.887 "name": null, 00:25:27.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.887 "is_configured": false, 00:25:27.887 "data_offset": 0, 00:25:27.887 "data_size": 63488 00:25:27.887 }, 00:25:27.887 { 00:25:27.887 "name": "pt2", 00:25:27.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.887 "is_configured": true, 00:25:27.887 "data_offset": 2048, 00:25:27.887 "data_size": 63488 00:25:27.887 }, 00:25:27.887 { 00:25:27.887 "name": "pt3", 00:25:27.887 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:27.887 "is_configured": true, 00:25:27.887 "data_offset": 2048, 00:25:27.887 "data_size": 63488 00:25:27.887 } 00:25:27.887 ] 00:25:27.887 }' 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.887 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.147 
17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.147 [2024-11-08 17:13:04.710592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.147 [2024-11-08 17:13:04.710625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.147 [2024-11-08 17:13:04.710712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.147 [2024-11-08 17:13:04.710792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.147 [2024-11-08 17:13:04.710808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.147 [2024-11-08 17:13:04.770567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:25:28.147 [2024-11-08 17:13:04.770733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.147 [2024-11-08 17:13:04.770773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:28.147 [2024-11-08 17:13:04.770786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.147 [2024-11-08 17:13:04.773182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.147 [2024-11-08 17:13:04.773221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:28.147 [2024-11-08 17:13:04.773301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:28.147 [2024-11-08 17:13:04.773351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:28.147 pt2 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.147 "name": "raid_bdev1", 00:25:28.147 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:28.147 "strip_size_kb": 64, 00:25:28.147 "state": "configuring", 00:25:28.147 "raid_level": "raid5f", 00:25:28.147 "superblock": true, 00:25:28.147 "num_base_bdevs": 3, 00:25:28.147 "num_base_bdevs_discovered": 1, 00:25:28.147 "num_base_bdevs_operational": 2, 00:25:28.147 "base_bdevs_list": [ 00:25:28.147 { 00:25:28.147 "name": null, 00:25:28.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.147 "is_configured": false, 00:25:28.147 "data_offset": 2048, 00:25:28.147 "data_size": 63488 00:25:28.147 }, 00:25:28.147 { 00:25:28.147 "name": "pt2", 00:25:28.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:28.147 "is_configured": true, 00:25:28.147 "data_offset": 2048, 00:25:28.147 "data_size": 63488 00:25:28.147 }, 00:25:28.147 { 00:25:28.147 "name": null, 00:25:28.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:28.147 "is_configured": false, 00:25:28.147 "data_offset": 2048, 00:25:28.147 "data_size": 63488 00:25:28.147 } 00:25:28.147 ] 00:25:28.147 }' 00:25:28.147 17:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.147 17:13:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.408 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:25:28.408 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:28.408 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.409 [2024-11-08 17:13:05.090685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:28.409 [2024-11-08 17:13:05.090772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.409 [2024-11-08 17:13:05.090798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:28.409 [2024-11-08 17:13:05.090809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.409 [2024-11-08 17:13:05.091316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.409 [2024-11-08 17:13:05.091340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:28.409 [2024-11-08 17:13:05.091426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:28.409 [2024-11-08 17:13:05.091458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:28.409 [2024-11-08 17:13:05.091577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:28.409 [2024-11-08 17:13:05.091589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:28.409 [2024-11-08 
17:13:05.091863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:28.409 [2024-11-08 17:13:05.095515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:28.409 [2024-11-08 17:13:05.095533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:28.409 [2024-11-08 17:13:05.095826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:28.409 pt3 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.409 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.668 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.668 "name": "raid_bdev1", 00:25:28.668 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:28.668 "strip_size_kb": 64, 00:25:28.668 "state": "online", 00:25:28.668 "raid_level": "raid5f", 00:25:28.668 "superblock": true, 00:25:28.668 "num_base_bdevs": 3, 00:25:28.668 "num_base_bdevs_discovered": 2, 00:25:28.668 "num_base_bdevs_operational": 2, 00:25:28.668 "base_bdevs_list": [ 00:25:28.668 { 00:25:28.668 "name": null, 00:25:28.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.668 "is_configured": false, 00:25:28.668 "data_offset": 2048, 00:25:28.668 "data_size": 63488 00:25:28.668 }, 00:25:28.668 { 00:25:28.668 "name": "pt2", 00:25:28.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:28.668 "is_configured": true, 00:25:28.668 "data_offset": 2048, 00:25:28.668 "data_size": 63488 00:25:28.668 }, 00:25:28.668 { 00:25:28.668 "name": "pt3", 00:25:28.668 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:28.668 "is_configured": true, 00:25:28.668 "data_offset": 2048, 00:25:28.668 "data_size": 63488 00:25:28.668 } 00:25:28.668 ] 00:25:28.668 }' 00:25:28.668 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.668 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:28.927 [2024-11-08 17:13:05.432291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.927 [2024-11-08 17:13:05.432327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.927 [2024-11-08 17:13:05.432412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.927 [2024-11-08 17:13:05.432486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.927 [2024-11-08 17:13:05.432497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.927 17:13:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.927 [2024-11-08 17:13:05.492318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:28.927 [2024-11-08 17:13:05.492477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.927 [2024-11-08 17:13:05.492504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:28.927 [2024-11-08 17:13:05.492514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.927 [2024-11-08 17:13:05.494974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.927 [2024-11-08 17:13:05.495005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:28.927 [2024-11-08 17:13:05.495088] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:28.927 [2024-11-08 17:13:05.495130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:28.927 [2024-11-08 17:13:05.495257] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:28.927 [2024-11-08 17:13:05.495272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.927 [2024-11-08 17:13:05.495289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:28.927 
[2024-11-08 17:13:05.495340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:28.927 pt1 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.927 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.927 "name": "raid_bdev1", 00:25:28.927 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:28.927 "strip_size_kb": 64, 00:25:28.927 "state": "configuring", 00:25:28.927 "raid_level": "raid5f", 00:25:28.927 "superblock": true, 00:25:28.927 "num_base_bdevs": 3, 00:25:28.927 "num_base_bdevs_discovered": 1, 00:25:28.927 "num_base_bdevs_operational": 2, 00:25:28.927 "base_bdevs_list": [ 00:25:28.927 { 00:25:28.927 "name": null, 00:25:28.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.927 "is_configured": false, 00:25:28.928 "data_offset": 2048, 00:25:28.928 "data_size": 63488 00:25:28.928 }, 00:25:28.928 { 00:25:28.928 "name": "pt2", 00:25:28.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:28.928 "is_configured": true, 00:25:28.928 "data_offset": 2048, 00:25:28.928 "data_size": 63488 00:25:28.928 }, 00:25:28.928 { 00:25:28.928 "name": null, 00:25:28.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:28.928 "is_configured": false, 00:25:28.928 "data_offset": 2048, 00:25:28.928 "data_size": 63488 00:25:28.928 } 00:25:28.928 ] 00:25:28.928 }' 00:25:28.928 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.928 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.185 [2024-11-08 17:13:05.884421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:29.185 [2024-11-08 17:13:05.884489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.185 [2024-11-08 17:13:05.884512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:29.185 [2024-11-08 17:13:05.884523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.185 [2024-11-08 17:13:05.885032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.185 [2024-11-08 17:13:05.885053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:29.185 [2024-11-08 17:13:05.885137] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:29.185 [2024-11-08 17:13:05.885164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:29.185 [2024-11-08 17:13:05.885287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:25:29.185 [2024-11-08 17:13:05.885296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:29.185 [2024-11-08 17:13:05.885549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:29.185 [2024-11-08 17:13:05.889239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:29.185 [2024-11-08 
17:13:05.889261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:29.185 [2024-11-08 17:13:05.889502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.185 pt3 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:29.185 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:29.186 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:29.186 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.186 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.186 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.186 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.186 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.186 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.186 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.186 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.443 17:13:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.443 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.443 "name": "raid_bdev1", 00:25:29.443 "uuid": "61460dda-d733-4592-b427-c34d0f466054", 00:25:29.443 "strip_size_kb": 64, 00:25:29.443 "state": "online", 00:25:29.443 "raid_level": "raid5f", 00:25:29.443 "superblock": true, 00:25:29.443 "num_base_bdevs": 3, 00:25:29.443 "num_base_bdevs_discovered": 2, 00:25:29.443 "num_base_bdevs_operational": 2, 00:25:29.443 "base_bdevs_list": [ 00:25:29.443 { 00:25:29.443 "name": null, 00:25:29.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.443 "is_configured": false, 00:25:29.443 "data_offset": 2048, 00:25:29.443 "data_size": 63488 00:25:29.443 }, 00:25:29.443 { 00:25:29.443 "name": "pt2", 00:25:29.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:29.443 "is_configured": true, 00:25:29.443 "data_offset": 2048, 00:25:29.443 "data_size": 63488 00:25:29.443 }, 00:25:29.443 { 00:25:29.443 "name": "pt3", 00:25:29.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:29.443 "is_configured": true, 00:25:29.443 "data_offset": 2048, 00:25:29.443 "data_size": 63488 00:25:29.443 } 00:25:29.443 ] 00:25:29.443 }' 00:25:29.443 17:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.443 17:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:29.743 [2024-11-08 17:13:06.246062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:29.743 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 61460dda-d733-4592-b427-c34d0f466054 '!=' 61460dda-d733-4592-b427-c34d0f466054 ']' 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79549 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 79549 ']' 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 79549 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79549 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:29.744 killing process with pid 79549 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 79549' 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 79549 00:25:29.744 17:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 79549 00:25:29.744 [2024-11-08 17:13:06.302402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:29.744 [2024-11-08 17:13:06.302509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:29.744 [2024-11-08 17:13:06.302576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:29.744 [2024-11-08 17:13:06.302627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:30.002 [2024-11-08 17:13:06.570919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:30.935 ************************************ 00:25:30.935 END TEST raid5f_superblock_test 00:25:30.935 ************************************ 00:25:30.935 17:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:30.935 00:25:30.935 real 0m5.985s 00:25:30.935 user 0m9.273s 00:25:30.935 sys 0m0.995s 00:25:30.935 17:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:30.935 17:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.935 17:13:07 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:25:30.935 17:13:07 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:25:30.935 17:13:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:25:30.935 17:13:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:30.935 17:13:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:30.935 ************************************ 00:25:30.935 START TEST 
raid5f_rebuild_test 00:25:30.935 ************************************ 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:30.935 17:13:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79976 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 79976 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 79976 ']' 00:25:30.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:30.935 17:13:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.935 [2024-11-08 17:13:07.458355] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:25:30.935 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:30.935 Zero copy mechanism will not be used. 
00:25:30.935 [2024-11-08 17:13:07.458625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79976 ] 00:25:30.935 [2024-11-08 17:13:07.614136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.193 [2024-11-08 17:13:07.733337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.193 [2024-11-08 17:13:07.881262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:31.193 [2024-11-08 17:13:07.881308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.759 BaseBdev1_malloc 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.759 [2024-11-08 17:13:08.367239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:25:31.759 [2024-11-08 17:13:08.367317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.759 [2024-11-08 17:13:08.367342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:31.759 [2024-11-08 17:13:08.367355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.759 [2024-11-08 17:13:08.369704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.759 [2024-11-08 17:13:08.369743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:31.759 BaseBdev1 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.759 BaseBdev2_malloc 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.759 [2024-11-08 17:13:08.405841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:31.759 [2024-11-08 17:13:08.405898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.759 [2024-11-08 17:13:08.405918] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:31.759 [2024-11-08 17:13:08.405931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.759 [2024-11-08 17:13:08.408143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.759 [2024-11-08 17:13:08.408181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:31.759 BaseBdev2 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.759 BaseBdev3_malloc 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.759 [2024-11-08 17:13:08.456184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:31.759 [2024-11-08 17:13:08.456242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.759 [2024-11-08 17:13:08.456266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:31.759 [2024-11-08 17:13:08.456279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.759 
[2024-11-08 17:13:08.458532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.759 [2024-11-08 17:13:08.458571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:31.759 BaseBdev3 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.759 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.018 spare_malloc 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.018 spare_delay 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.018 [2024-11-08 17:13:08.506957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:32.018 [2024-11-08 17:13:08.507007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.018 [2024-11-08 17:13:08.507023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009c80 00:25:32.018 [2024-11-08 17:13:08.507033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.018 [2024-11-08 17:13:08.509272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.018 [2024-11-08 17:13:08.509310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:32.018 spare 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.018 [2024-11-08 17:13:08.515030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:32.018 [2024-11-08 17:13:08.516987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:32.018 [2024-11-08 17:13:08.517052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:32.018 [2024-11-08 17:13:08.517138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:32.018 [2024-11-08 17:13:08.517150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:32.018 [2024-11-08 17:13:08.517424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:32.018 [2024-11-08 17:13:08.521232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:32.018 [2024-11-08 17:13:08.521354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:32.018 [2024-11-08 17:13:08.521552] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.018 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.018 "name": "raid_bdev1", 00:25:32.018 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 
00:25:32.018 "strip_size_kb": 64, 00:25:32.018 "state": "online", 00:25:32.018 "raid_level": "raid5f", 00:25:32.018 "superblock": false, 00:25:32.018 "num_base_bdevs": 3, 00:25:32.019 "num_base_bdevs_discovered": 3, 00:25:32.019 "num_base_bdevs_operational": 3, 00:25:32.019 "base_bdevs_list": [ 00:25:32.019 { 00:25:32.019 "name": "BaseBdev1", 00:25:32.019 "uuid": "5c68e0af-c278-5f80-9169-a8a5f327f3a2", 00:25:32.019 "is_configured": true, 00:25:32.019 "data_offset": 0, 00:25:32.019 "data_size": 65536 00:25:32.019 }, 00:25:32.019 { 00:25:32.019 "name": "BaseBdev2", 00:25:32.019 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:32.019 "is_configured": true, 00:25:32.019 "data_offset": 0, 00:25:32.019 "data_size": 65536 00:25:32.019 }, 00:25:32.019 { 00:25:32.019 "name": "BaseBdev3", 00:25:32.019 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:32.019 "is_configured": true, 00:25:32.019 "data_offset": 0, 00:25:32.019 "data_size": 65536 00:25:32.019 } 00:25:32.019 ] 00:25:32.019 }' 00:25:32.019 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.019 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.276 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:32.277 [2024-11-08 17:13:08.842198] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:32.277 17:13:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
raid_bdev1 /dev/nbd0 00:25:32.535 [2024-11-08 17:13:09.082082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:32.535 /dev/nbd0 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:32.535 1+0 records in 00:25:32.535 1+0 records out 00:25:32.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288325 s, 14.2 MB/s 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # '[' 4096 '!=' 0 ']' 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:25:32.535 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:25:33.106 512+0 records in 00:25:33.106 512+0 records out 00:25:33.106 67108864 bytes (67 MB, 64 MiB) copied, 0.482061 s, 139 MB/s 00:25:33.106 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:33.106 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:33.106 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:33.106 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:33.106 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:33.106 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:33.106 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:33.106 [2024-11-08 17:13:09.817529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.366 [2024-11-08 17:13:09.849868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:33.366 "name": "raid_bdev1", 00:25:33.366 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:33.366 "strip_size_kb": 64, 00:25:33.366 "state": "online", 00:25:33.366 "raid_level": "raid5f", 00:25:33.366 "superblock": false, 00:25:33.366 "num_base_bdevs": 3, 00:25:33.366 "num_base_bdevs_discovered": 2, 00:25:33.366 "num_base_bdevs_operational": 2, 00:25:33.366 "base_bdevs_list": [ 00:25:33.366 { 00:25:33.366 "name": null, 00:25:33.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.366 "is_configured": false, 00:25:33.366 "data_offset": 0, 00:25:33.366 "data_size": 65536 00:25:33.366 }, 00:25:33.366 { 00:25:33.366 "name": "BaseBdev2", 00:25:33.366 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:33.366 "is_configured": true, 00:25:33.366 "data_offset": 0, 00:25:33.366 "data_size": 65536 00:25:33.366 }, 00:25:33.366 { 00:25:33.366 "name": "BaseBdev3", 00:25:33.366 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:33.366 "is_configured": true, 00:25:33.366 "data_offset": 0, 00:25:33.366 "data_size": 65536 00:25:33.366 } 00:25:33.366 ] 00:25:33.366 }' 
00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:33.366 17:13:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.623 17:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:33.623 17:13:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.623 17:13:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.623 [2024-11-08 17:13:10.166012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:33.623 [2024-11-08 17:13:10.177521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:25:33.623 17:13:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.623 17:13:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:33.623 [2024-11-08 17:13:10.183329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:34.556 "name": "raid_bdev1", 00:25:34.556 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:34.556 "strip_size_kb": 64, 00:25:34.556 "state": "online", 00:25:34.556 "raid_level": "raid5f", 00:25:34.556 "superblock": false, 00:25:34.556 "num_base_bdevs": 3, 00:25:34.556 "num_base_bdevs_discovered": 3, 00:25:34.556 "num_base_bdevs_operational": 3, 00:25:34.556 "process": { 00:25:34.556 "type": "rebuild", 00:25:34.556 "target": "spare", 00:25:34.556 "progress": { 00:25:34.556 "blocks": 18432, 00:25:34.556 "percent": 14 00:25:34.556 } 00:25:34.556 }, 00:25:34.556 "base_bdevs_list": [ 00:25:34.556 { 00:25:34.556 "name": "spare", 00:25:34.556 "uuid": "733c6794-382f-5687-ac15-2a377bd6a022", 00:25:34.556 "is_configured": true, 00:25:34.556 "data_offset": 0, 00:25:34.556 "data_size": 65536 00:25:34.556 }, 00:25:34.556 { 00:25:34.556 "name": "BaseBdev2", 00:25:34.556 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:34.556 "is_configured": true, 00:25:34.556 "data_offset": 0, 00:25:34.556 "data_size": 65536 00:25:34.556 }, 00:25:34.556 { 00:25:34.556 "name": "BaseBdev3", 00:25:34.556 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:34.556 "is_configured": true, 00:25:34.556 "data_offset": 0, 00:25:34.556 "data_size": 65536 00:25:34.556 } 00:25:34.556 ] 00:25:34.556 }' 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:34.556 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:34.814 17:13:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.814 [2024-11-08 17:13:11.293042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:34.814 [2024-11-08 17:13:11.295260] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:34.814 [2024-11-08 17:13:11.295317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.814 [2024-11-08 17:13:11.295336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:34.814 [2024-11-08 17:13:11.295344] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:34.814 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:34.815 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.815 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.815 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.815 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.815 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.815 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:34.815 "name": "raid_bdev1", 00:25:34.815 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:34.815 "strip_size_kb": 64, 00:25:34.815 "state": "online", 00:25:34.815 "raid_level": "raid5f", 00:25:34.815 "superblock": false, 00:25:34.815 "num_base_bdevs": 3, 00:25:34.815 "num_base_bdevs_discovered": 2, 00:25:34.815 "num_base_bdevs_operational": 2, 00:25:34.815 "base_bdevs_list": [ 00:25:34.815 { 00:25:34.815 "name": null, 00:25:34.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.815 "is_configured": false, 00:25:34.815 "data_offset": 0, 00:25:34.815 "data_size": 65536 00:25:34.815 }, 00:25:34.815 { 00:25:34.815 "name": "BaseBdev2", 00:25:34.815 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:34.815 "is_configured": true, 00:25:34.815 "data_offset": 0, 00:25:34.815 "data_size": 65536 00:25:34.815 }, 00:25:34.815 { 00:25:34.815 "name": "BaseBdev3", 00:25:34.815 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:34.815 "is_configured": true, 00:25:34.815 "data_offset": 0, 00:25:34.815 "data_size": 65536 00:25:34.815 } 00:25:34.815 ] 00:25:34.815 }' 
00:25:34.815 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:34.815 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:35.073 "name": "raid_bdev1", 00:25:35.073 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:35.073 "strip_size_kb": 64, 00:25:35.073 "state": "online", 00:25:35.073 "raid_level": "raid5f", 00:25:35.073 "superblock": false, 00:25:35.073 "num_base_bdevs": 3, 00:25:35.073 "num_base_bdevs_discovered": 2, 00:25:35.073 "num_base_bdevs_operational": 2, 00:25:35.073 "base_bdevs_list": [ 00:25:35.073 { 00:25:35.073 "name": null, 00:25:35.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.073 "is_configured": false, 00:25:35.073 "data_offset": 0, 00:25:35.073 "data_size": 65536 00:25:35.073 }, 
00:25:35.073 { 00:25:35.073 "name": "BaseBdev2", 00:25:35.073 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:35.073 "is_configured": true, 00:25:35.073 "data_offset": 0, 00:25:35.073 "data_size": 65536 00:25:35.073 }, 00:25:35.073 { 00:25:35.073 "name": "BaseBdev3", 00:25:35.073 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:35.073 "is_configured": true, 00:25:35.073 "data_offset": 0, 00:25:35.073 "data_size": 65536 00:25:35.073 } 00:25:35.073 ] 00:25:35.073 }' 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.073 [2024-11-08 17:13:11.727199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:35.073 [2024-11-08 17:13:11.738032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.073 17:13:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:35.073 [2024-11-08 17:13:11.743513] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:36.446 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.446 17:13:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:36.446 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:36.446 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:36.446 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:36.446 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.446 17:13:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.446 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.446 17:13:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.446 17:13:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.446 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:36.446 "name": "raid_bdev1", 00:25:36.446 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:36.446 "strip_size_kb": 64, 00:25:36.446 "state": "online", 00:25:36.446 "raid_level": "raid5f", 00:25:36.446 "superblock": false, 00:25:36.446 "num_base_bdevs": 3, 00:25:36.447 "num_base_bdevs_discovered": 3, 00:25:36.447 "num_base_bdevs_operational": 3, 00:25:36.447 "process": { 00:25:36.447 "type": "rebuild", 00:25:36.447 "target": "spare", 00:25:36.447 "progress": { 00:25:36.447 "blocks": 18432, 00:25:36.447 "percent": 14 00:25:36.447 } 00:25:36.447 }, 00:25:36.447 "base_bdevs_list": [ 00:25:36.447 { 00:25:36.447 "name": "spare", 00:25:36.447 "uuid": "733c6794-382f-5687-ac15-2a377bd6a022", 00:25:36.447 "is_configured": true, 00:25:36.447 "data_offset": 0, 00:25:36.447 "data_size": 65536 00:25:36.447 }, 00:25:36.447 { 00:25:36.447 "name": "BaseBdev2", 00:25:36.447 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:36.447 
"is_configured": true, 00:25:36.447 "data_offset": 0, 00:25:36.447 "data_size": 65536 00:25:36.447 }, 00:25:36.447 { 00:25:36.447 "name": "BaseBdev3", 00:25:36.447 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:36.447 "is_configured": true, 00:25:36.447 "data_offset": 0, 00:25:36.447 "data_size": 65536 00:25:36.447 } 00:25:36.447 ] 00:25:36.447 }' 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=480 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:36.447 "name": "raid_bdev1", 00:25:36.447 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:36.447 "strip_size_kb": 64, 00:25:36.447 "state": "online", 00:25:36.447 "raid_level": "raid5f", 00:25:36.447 "superblock": false, 00:25:36.447 "num_base_bdevs": 3, 00:25:36.447 "num_base_bdevs_discovered": 3, 00:25:36.447 "num_base_bdevs_operational": 3, 00:25:36.447 "process": { 00:25:36.447 "type": "rebuild", 00:25:36.447 "target": "spare", 00:25:36.447 "progress": { 00:25:36.447 "blocks": 22528, 00:25:36.447 "percent": 17 00:25:36.447 } 00:25:36.447 }, 00:25:36.447 "base_bdevs_list": [ 00:25:36.447 { 00:25:36.447 "name": "spare", 00:25:36.447 "uuid": "733c6794-382f-5687-ac15-2a377bd6a022", 00:25:36.447 "is_configured": true, 00:25:36.447 "data_offset": 0, 00:25:36.447 "data_size": 65536 00:25:36.447 }, 00:25:36.447 { 00:25:36.447 "name": "BaseBdev2", 00:25:36.447 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:36.447 "is_configured": true, 00:25:36.447 "data_offset": 0, 00:25:36.447 "data_size": 65536 00:25:36.447 }, 00:25:36.447 { 00:25:36.447 "name": "BaseBdev3", 00:25:36.447 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:36.447 "is_configured": true, 00:25:36.447 "data_offset": 0, 00:25:36.447 "data_size": 65536 00:25:36.447 } 00:25:36.447 ] 00:25:36.447 }' 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:36.447 17:13:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:36.447 17:13:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.381 17:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:37.381 "name": "raid_bdev1", 00:25:37.381 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:37.381 "strip_size_kb": 64, 00:25:37.381 "state": "online", 00:25:37.382 "raid_level": "raid5f", 00:25:37.382 "superblock": false, 00:25:37.382 "num_base_bdevs": 3, 00:25:37.382 
"num_base_bdevs_discovered": 3, 00:25:37.382 "num_base_bdevs_operational": 3, 00:25:37.382 "process": { 00:25:37.382 "type": "rebuild", 00:25:37.382 "target": "spare", 00:25:37.382 "progress": { 00:25:37.382 "blocks": 43008, 00:25:37.382 "percent": 32 00:25:37.382 } 00:25:37.382 }, 00:25:37.382 "base_bdevs_list": [ 00:25:37.382 { 00:25:37.382 "name": "spare", 00:25:37.382 "uuid": "733c6794-382f-5687-ac15-2a377bd6a022", 00:25:37.382 "is_configured": true, 00:25:37.382 "data_offset": 0, 00:25:37.382 "data_size": 65536 00:25:37.382 }, 00:25:37.382 { 00:25:37.382 "name": "BaseBdev2", 00:25:37.382 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:37.382 "is_configured": true, 00:25:37.382 "data_offset": 0, 00:25:37.382 "data_size": 65536 00:25:37.382 }, 00:25:37.382 { 00:25:37.382 "name": "BaseBdev3", 00:25:37.382 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:37.382 "is_configured": true, 00:25:37.382 "data_offset": 0, 00:25:37.382 "data_size": 65536 00:25:37.382 } 00:25:37.382 ] 00:25:37.382 }' 00:25:37.382 17:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:37.382 17:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:37.382 17:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:37.382 17:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:37.382 17:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:38.756 "name": "raid_bdev1", 00:25:38.756 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:38.756 "strip_size_kb": 64, 00:25:38.756 "state": "online", 00:25:38.756 "raid_level": "raid5f", 00:25:38.756 "superblock": false, 00:25:38.756 "num_base_bdevs": 3, 00:25:38.756 "num_base_bdevs_discovered": 3, 00:25:38.756 "num_base_bdevs_operational": 3, 00:25:38.756 "process": { 00:25:38.756 "type": "rebuild", 00:25:38.756 "target": "spare", 00:25:38.756 "progress": { 00:25:38.756 "blocks": 65536, 00:25:38.756 "percent": 50 00:25:38.756 } 00:25:38.756 }, 00:25:38.756 "base_bdevs_list": [ 00:25:38.756 { 00:25:38.756 "name": "spare", 00:25:38.756 "uuid": "733c6794-382f-5687-ac15-2a377bd6a022", 00:25:38.756 "is_configured": true, 00:25:38.756 "data_offset": 0, 00:25:38.756 "data_size": 65536 00:25:38.756 }, 00:25:38.756 { 00:25:38.756 "name": "BaseBdev2", 00:25:38.756 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:38.756 "is_configured": true, 00:25:38.756 "data_offset": 0, 00:25:38.756 "data_size": 65536 00:25:38.756 }, 00:25:38.756 { 00:25:38.756 "name": "BaseBdev3", 00:25:38.756 "uuid": 
"a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:38.756 "is_configured": true, 00:25:38.756 "data_offset": 0, 00:25:38.756 "data_size": 65536 00:25:38.756 } 00:25:38.756 ] 00:25:38.756 }' 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.756 17:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:39.703 "name": "raid_bdev1", 00:25:39.703 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:39.703 "strip_size_kb": 64, 00:25:39.703 "state": "online", 00:25:39.703 "raid_level": "raid5f", 00:25:39.703 "superblock": false, 00:25:39.703 "num_base_bdevs": 3, 00:25:39.703 "num_base_bdevs_discovered": 3, 00:25:39.703 "num_base_bdevs_operational": 3, 00:25:39.703 "process": { 00:25:39.703 "type": "rebuild", 00:25:39.703 "target": "spare", 00:25:39.703 "progress": { 00:25:39.703 "blocks": 88064, 00:25:39.703 "percent": 67 00:25:39.703 } 00:25:39.703 }, 00:25:39.703 "base_bdevs_list": [ 00:25:39.703 { 00:25:39.703 "name": "spare", 00:25:39.703 "uuid": "733c6794-382f-5687-ac15-2a377bd6a022", 00:25:39.703 "is_configured": true, 00:25:39.703 "data_offset": 0, 00:25:39.703 "data_size": 65536 00:25:39.703 }, 00:25:39.703 { 00:25:39.703 "name": "BaseBdev2", 00:25:39.703 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:39.703 "is_configured": true, 00:25:39.703 "data_offset": 0, 00:25:39.703 "data_size": 65536 00:25:39.703 }, 00:25:39.703 { 00:25:39.703 "name": "BaseBdev3", 00:25:39.703 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:39.703 "is_configured": true, 00:25:39.703 "data_offset": 0, 00:25:39.703 "data_size": 65536 00:25:39.703 } 00:25:39.703 ] 00:25:39.703 }' 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.703 17:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:40.657 "name": "raid_bdev1", 00:25:40.657 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:40.657 "strip_size_kb": 64, 00:25:40.657 "state": "online", 00:25:40.657 "raid_level": "raid5f", 00:25:40.657 "superblock": false, 00:25:40.657 "num_base_bdevs": 3, 00:25:40.657 "num_base_bdevs_discovered": 3, 00:25:40.657 "num_base_bdevs_operational": 3, 00:25:40.657 "process": { 00:25:40.657 "type": "rebuild", 00:25:40.657 "target": "spare", 00:25:40.657 "progress": { 00:25:40.657 "blocks": 110592, 00:25:40.657 "percent": 84 00:25:40.657 } 00:25:40.657 }, 00:25:40.657 "base_bdevs_list": [ 00:25:40.657 { 00:25:40.657 "name": "spare", 00:25:40.657 "uuid": "733c6794-382f-5687-ac15-2a377bd6a022", 00:25:40.657 "is_configured": true, 00:25:40.657 "data_offset": 0, 00:25:40.657 "data_size": 
65536 00:25:40.657 }, 00:25:40.657 { 00:25:40.657 "name": "BaseBdev2", 00:25:40.657 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:40.657 "is_configured": true, 00:25:40.657 "data_offset": 0, 00:25:40.657 "data_size": 65536 00:25:40.657 }, 00:25:40.657 { 00:25:40.657 "name": "BaseBdev3", 00:25:40.657 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:40.657 "is_configured": true, 00:25:40.657 "data_offset": 0, 00:25:40.657 "data_size": 65536 00:25:40.657 } 00:25:40.657 ] 00:25:40.657 }' 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:40.657 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:40.658 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:40.916 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:40.916 17:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:41.850 [2024-11-08 17:13:18.211343] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:41.850 [2024-11-08 17:13:18.211448] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:41.850 [2024-11-08 17:13:18.211498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.850 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:41.850 "name": "raid_bdev1", 00:25:41.850 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:41.850 "strip_size_kb": 64, 00:25:41.850 "state": "online", 00:25:41.850 "raid_level": "raid5f", 00:25:41.850 "superblock": false, 00:25:41.851 "num_base_bdevs": 3, 00:25:41.851 "num_base_bdevs_discovered": 3, 00:25:41.851 "num_base_bdevs_operational": 3, 00:25:41.851 "base_bdevs_list": [ 00:25:41.851 { 00:25:41.851 "name": "spare", 00:25:41.851 "uuid": "733c6794-382f-5687-ac15-2a377bd6a022", 00:25:41.851 "is_configured": true, 00:25:41.851 "data_offset": 0, 00:25:41.851 "data_size": 65536 00:25:41.851 }, 00:25:41.851 { 00:25:41.851 "name": "BaseBdev2", 00:25:41.851 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:41.851 "is_configured": true, 00:25:41.851 "data_offset": 0, 00:25:41.851 "data_size": 65536 00:25:41.851 }, 00:25:41.851 { 00:25:41.851 "name": "BaseBdev3", 00:25:41.851 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:41.851 "is_configured": true, 00:25:41.851 "data_offset": 0, 00:25:41.851 "data_size": 65536 00:25:41.851 } 00:25:41.851 ] 00:25:41.851 }' 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:41.851 17:13:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:41.851 "name": "raid_bdev1", 00:25:41.851 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:41.851 "strip_size_kb": 64, 00:25:41.851 "state": "online", 00:25:41.851 "raid_level": "raid5f", 00:25:41.851 "superblock": false, 00:25:41.851 "num_base_bdevs": 3, 00:25:41.851 "num_base_bdevs_discovered": 3, 00:25:41.851 "num_base_bdevs_operational": 3, 00:25:41.851 "base_bdevs_list": [ 00:25:41.851 
{ 00:25:41.851 "name": "spare", 00:25:41.851 "uuid": "733c6794-382f-5687-ac15-2a377bd6a022", 00:25:41.851 "is_configured": true, 00:25:41.851 "data_offset": 0, 00:25:41.851 "data_size": 65536 00:25:41.851 }, 00:25:41.851 { 00:25:41.851 "name": "BaseBdev2", 00:25:41.851 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:41.851 "is_configured": true, 00:25:41.851 "data_offset": 0, 00:25:41.851 "data_size": 65536 00:25:41.851 }, 00:25:41.851 { 00:25:41.851 "name": "BaseBdev3", 00:25:41.851 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:41.851 "is_configured": true, 00:25:41.851 "data_offset": 0, 00:25:41.851 "data_size": 65536 00:25:41.851 } 00:25:41.851 ] 00:25:41.851 }' 00:25:41.851 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:42.110 "name": "raid_bdev1", 00:25:42.110 "uuid": "42d4cc2e-1cf2-4374-8e48-293fe6010d60", 00:25:42.110 "strip_size_kb": 64, 00:25:42.110 "state": "online", 00:25:42.110 "raid_level": "raid5f", 00:25:42.110 "superblock": false, 00:25:42.110 "num_base_bdevs": 3, 00:25:42.110 "num_base_bdevs_discovered": 3, 00:25:42.110 "num_base_bdevs_operational": 3, 00:25:42.110 "base_bdevs_list": [ 00:25:42.110 { 00:25:42.110 "name": "spare", 00:25:42.110 "uuid": "733c6794-382f-5687-ac15-2a377bd6a022", 00:25:42.110 "is_configured": true, 00:25:42.110 "data_offset": 0, 00:25:42.110 "data_size": 65536 00:25:42.110 }, 00:25:42.110 { 00:25:42.110 "name": "BaseBdev2", 00:25:42.110 "uuid": "9c65f7e1-656f-55a3-8588-8d12f8f9461a", 00:25:42.110 "is_configured": true, 00:25:42.110 "data_offset": 0, 00:25:42.110 "data_size": 65536 00:25:42.110 }, 00:25:42.110 { 00:25:42.110 "name": "BaseBdev3", 00:25:42.110 "uuid": "a569ff7d-1218-58c1-84a5-b9ad2f56ed46", 00:25:42.110 "is_configured": true, 00:25:42.110 "data_offset": 0, 00:25:42.110 "data_size": 65536 00:25:42.110 } 00:25:42.110 ] 00:25:42.110 }' 00:25:42.110 17:13:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:42.110 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.395 [2024-11-08 17:13:18.946967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:42.395 [2024-11-08 17:13:18.946998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:42.395 [2024-11-08 17:13:18.947087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:42.395 [2024-11-08 17:13:18.947182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:42.395 [2024-11-08 17:13:18.947198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:42.395 17:13:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:42.395 17:13:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:42.653 /dev/nbd0 00:25:42.653 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:42.653 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:42.653 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:42.653 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:25:42.653 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:42.653 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:42.653 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:25:42.653 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:25:42.653 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:42.653 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:42.654 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:42.654 1+0 records in 00:25:42.654 1+0 records out 00:25:42.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338188 s, 12.1 MB/s 00:25:42.654 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:42.654 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:25:42.654 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:42.654 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:42.654 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:25:42.654 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:42.654 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:42.654 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:42.911 /dev/nbd1 00:25:42.911 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # local i 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:42.912 1+0 records in 00:25:42.912 1+0 records out 00:25:42.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509487 s, 8.0 MB/s 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks 
/var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:42.912 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:43.169 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:43.169 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:43.169 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:43.169 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:43.169 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:43.169 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:43.169 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:43.169 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:43.169 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:43.169 17:13:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79976 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 79976 ']' 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 79976 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79976 00:25:43.427 killing process with pid 79976 00:25:43.427 Received shutdown signal, test time was about 60.000000 seconds 00:25:43.427 00:25:43.427 Latency(us) 00:25:43.427 [2024-11-08T17:13:20.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.427 [2024-11-08T17:13:20.142Z] =================================================================================================================== 00:25:43.427 [2024-11-08T17:13:20.142Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79976' 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 79976 00:25:43.427 17:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 79976 00:25:43.427 [2024-11-08 17:13:20.090086] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:43.687 [2024-11-08 17:13:20.353020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:44.621 ************************************ 00:25:44.621 END TEST raid5f_rebuild_test 00:25:44.621 ************************************ 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:25:44.621 00:25:44.621 real 0m13.726s 00:25:44.621 user 0m16.495s 00:25:44.621 sys 0m1.626s 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.621 17:13:21 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:25:44.621 17:13:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:25:44.621 17:13:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:44.621 17:13:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:44.621 ************************************ 00:25:44.621 START TEST raid5f_rebuild_test_sb 00:25:44.621 ************************************ 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:25:44.621 17:13:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:44.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=80398 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 80398 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80398 ']' 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:44.621 17:13:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.621 17:13:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:44.621 [2024-11-08 17:13:21.234827] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:25:44.621 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:44.621 Zero copy mechanism will not be used. 00:25:44.621 [2024-11-08 17:13:21.235086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80398 ] 00:25:44.879 [2024-11-08 17:13:21.396988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.879 [2024-11-08 17:13:21.512314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.137 [2024-11-08 17:13:21.659037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:45.137 [2024-11-08 17:13:21.659252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:45.395 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:45.395 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:25:45.395 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:45.395 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:45.395 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.395 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.653 BaseBdev1_malloc 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.653 [2024-11-08 17:13:22.116403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:45.653 [2024-11-08 17:13:22.116483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.653 [2024-11-08 17:13:22.116505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:45.653 [2024-11-08 17:13:22.116516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.653 [2024-11-08 17:13:22.118865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.653 [2024-11-08 17:13:22.118904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:45.653 BaseBdev1 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.653 17:13:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.653 BaseBdev2_malloc 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.653 [2024-11-08 17:13:22.158467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:45.653 [2024-11-08 17:13:22.158633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.653 [2024-11-08 17:13:22.158674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:45.653 [2024-11-08 17:13:22.158798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.653 [2024-11-08 17:13:22.161003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.653 [2024-11-08 17:13:22.161116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:45.653 BaseBdev2 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.653 BaseBdev3_malloc 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.653 [2024-11-08 17:13:22.214175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:45.653 [2024-11-08 17:13:22.214226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.653 [2024-11-08 17:13:22.214246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:45.653 [2024-11-08 17:13:22.214257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.653 [2024-11-08 17:13:22.216448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.653 [2024-11-08 17:13:22.216484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:45.653 BaseBdev3 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.653 spare_malloc 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.653 spare_delay 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.653 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.654 [2024-11-08 17:13:22.264062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:45.654 [2024-11-08 17:13:22.264206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.654 [2024-11-08 17:13:22.264232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:45.654 [2024-11-08 17:13:22.264244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.654 [2024-11-08 17:13:22.266492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.654 [2024-11-08 17:13:22.266529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:45.654 spare 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.654 [2024-11-08 17:13:22.272133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:25:45.654 [2024-11-08 17:13:22.274052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:45.654 [2024-11-08 17:13:22.274117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:45.654 [2024-11-08 17:13:22.274296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:45.654 [2024-11-08 17:13:22.274309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:45.654 [2024-11-08 17:13:22.274563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:45.654 [2024-11-08 17:13:22.278345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:45.654 [2024-11-08 17:13:22.278465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:45.654 [2024-11-08 17:13:22.278649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:45.654 "name": "raid_bdev1", 00:25:45.654 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:45.654 "strip_size_kb": 64, 00:25:45.654 "state": "online", 00:25:45.654 "raid_level": "raid5f", 00:25:45.654 "superblock": true, 00:25:45.654 "num_base_bdevs": 3, 00:25:45.654 "num_base_bdevs_discovered": 3, 00:25:45.654 "num_base_bdevs_operational": 3, 00:25:45.654 "base_bdevs_list": [ 00:25:45.654 { 00:25:45.654 "name": "BaseBdev1", 00:25:45.654 "uuid": "ff3f23f5-d8c8-5ce4-b5c8-8ae80f7c5dde", 00:25:45.654 "is_configured": true, 00:25:45.654 "data_offset": 2048, 00:25:45.654 "data_size": 63488 00:25:45.654 }, 00:25:45.654 { 00:25:45.654 "name": "BaseBdev2", 00:25:45.654 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:45.654 "is_configured": true, 00:25:45.654 "data_offset": 2048, 00:25:45.654 "data_size": 63488 00:25:45.654 }, 00:25:45.654 { 00:25:45.654 "name": "BaseBdev3", 00:25:45.654 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:45.654 "is_configured": true, 00:25:45.654 "data_offset": 2048, 00:25:45.654 "data_size": 63488 00:25:45.654 } 
00:25:45.654 ] 00:25:45.654 }' 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:45.654 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.911 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:45.911 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:45.911 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.911 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.911 [2024-11-08 17:13:22.607231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:45.911 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.170 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:25:46.170 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:46.170 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.170 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.170 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.170 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.170 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:25:46.170 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:46.170 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:46.171 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:46.171 
17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:46.171 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:46.171 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:46.171 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:46.171 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:46.171 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:46.171 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:46.171 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:46.171 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.171 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:46.171 [2024-11-08 17:13:22.859139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:46.171 /dev/nbd0 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:46.429 1+0 records in 00:25:46.429 1+0 records out 00:25:46.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857256 s, 4.8 MB/s 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:25:46.429 17:13:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:25:46.993 496+0 records in 
00:25:46.993 496+0 records out 00:25:46.993 65011712 bytes (65 MB, 62 MiB) copied, 0.502764 s, 129 MB/s 00:25:46.994 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:46.994 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:46.994 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:46.994 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:46.994 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:46.994 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:46.994 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:47.251 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:47.251 [2024-11-08 17:13:23.709512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.251 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:47.251 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:47.251 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:47.251 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:47.251 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:47.251 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:47.251 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 
00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.252 [2024-11-08 17:13:23.723339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:47.252 "name": "raid_bdev1", 00:25:47.252 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:47.252 "strip_size_kb": 64, 00:25:47.252 "state": "online", 00:25:47.252 "raid_level": "raid5f", 00:25:47.252 "superblock": true, 00:25:47.252 "num_base_bdevs": 3, 00:25:47.252 "num_base_bdevs_discovered": 2, 00:25:47.252 "num_base_bdevs_operational": 2, 00:25:47.252 "base_bdevs_list": [ 00:25:47.252 { 00:25:47.252 "name": null, 00:25:47.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.252 "is_configured": false, 00:25:47.252 "data_offset": 0, 00:25:47.252 "data_size": 63488 00:25:47.252 }, 00:25:47.252 { 00:25:47.252 "name": "BaseBdev2", 00:25:47.252 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:47.252 "is_configured": true, 00:25:47.252 "data_offset": 2048, 00:25:47.252 "data_size": 63488 00:25:47.252 }, 00:25:47.252 { 00:25:47.252 "name": "BaseBdev3", 00:25:47.252 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:47.252 "is_configured": true, 00:25:47.252 "data_offset": 2048, 00:25:47.252 "data_size": 63488 00:25:47.252 } 00:25:47.252 ] 00:25:47.252 }' 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:47.252 17:13:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.511 17:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:47.511 17:13:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.511 17:13:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.511 [2024-11-08 17:13:24.067426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:47.511 [2024-11-08 
17:13:24.078835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:25:47.511 17:13:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.511 17:13:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:47.511 [2024-11-08 17:13:24.084682] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:48.453 "name": "raid_bdev1", 00:25:48.453 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:48.453 "strip_size_kb": 64, 00:25:48.453 "state": "online", 00:25:48.453 "raid_level": "raid5f", 00:25:48.453 "superblock": true, 00:25:48.453 "num_base_bdevs": 3, 00:25:48.453 "num_base_bdevs_discovered": 3, 00:25:48.453 
"num_base_bdevs_operational": 3, 00:25:48.453 "process": { 00:25:48.453 "type": "rebuild", 00:25:48.453 "target": "spare", 00:25:48.453 "progress": { 00:25:48.453 "blocks": 18432, 00:25:48.453 "percent": 14 00:25:48.453 } 00:25:48.453 }, 00:25:48.453 "base_bdevs_list": [ 00:25:48.453 { 00:25:48.453 "name": "spare", 00:25:48.453 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:48.453 "is_configured": true, 00:25:48.453 "data_offset": 2048, 00:25:48.453 "data_size": 63488 00:25:48.453 }, 00:25:48.453 { 00:25:48.453 "name": "BaseBdev2", 00:25:48.453 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:48.453 "is_configured": true, 00:25:48.453 "data_offset": 2048, 00:25:48.453 "data_size": 63488 00:25:48.453 }, 00:25:48.453 { 00:25:48.453 "name": "BaseBdev3", 00:25:48.453 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:48.453 "is_configured": true, 00:25:48.453 "data_offset": 2048, 00:25:48.453 "data_size": 63488 00:25:48.453 } 00:25:48.453 ] 00:25:48.453 }' 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:48.453 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.713 [2024-11-08 17:13:25.197905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:48.713 [2024-11-08 17:13:25.298185] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild 
on raid bdev raid_bdev1: No such device 00:25:48.713 [2024-11-08 17:13:25.298281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.713 [2024-11-08 17:13:25.298302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:48.713 [2024-11-08 17:13:25.298312] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:48.713 "name": "raid_bdev1", 00:25:48.713 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:48.713 "strip_size_kb": 64, 00:25:48.713 "state": "online", 00:25:48.713 "raid_level": "raid5f", 00:25:48.713 "superblock": true, 00:25:48.713 "num_base_bdevs": 3, 00:25:48.713 "num_base_bdevs_discovered": 2, 00:25:48.713 "num_base_bdevs_operational": 2, 00:25:48.713 "base_bdevs_list": [ 00:25:48.713 { 00:25:48.713 "name": null, 00:25:48.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.713 "is_configured": false, 00:25:48.713 "data_offset": 0, 00:25:48.713 "data_size": 63488 00:25:48.713 }, 00:25:48.713 { 00:25:48.713 "name": "BaseBdev2", 00:25:48.713 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:48.713 "is_configured": true, 00:25:48.713 "data_offset": 2048, 00:25:48.713 "data_size": 63488 00:25:48.713 }, 00:25:48.713 { 00:25:48.713 "name": "BaseBdev3", 00:25:48.713 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:48.713 "is_configured": true, 00:25:48.713 "data_offset": 2048, 00:25:48.713 "data_size": 63488 00:25:48.713 } 00:25:48.713 ] 00:25:48.713 }' 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:48.713 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.971 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:48.971 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:48.971 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:48.971 17:13:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:48.971 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:48.972 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.972 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.972 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.972 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.972 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.972 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:48.972 "name": "raid_bdev1", 00:25:48.972 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:48.972 "strip_size_kb": 64, 00:25:48.972 "state": "online", 00:25:48.972 "raid_level": "raid5f", 00:25:48.972 "superblock": true, 00:25:48.972 "num_base_bdevs": 3, 00:25:48.972 "num_base_bdevs_discovered": 2, 00:25:48.972 "num_base_bdevs_operational": 2, 00:25:48.972 "base_bdevs_list": [ 00:25:48.972 { 00:25:48.972 "name": null, 00:25:48.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.972 "is_configured": false, 00:25:48.972 "data_offset": 0, 00:25:48.972 "data_size": 63488 00:25:48.972 }, 00:25:48.972 { 00:25:48.972 "name": "BaseBdev2", 00:25:48.972 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:48.972 "is_configured": true, 00:25:48.972 "data_offset": 2048, 00:25:48.972 "data_size": 63488 00:25:48.972 }, 00:25:48.972 { 00:25:48.972 "name": "BaseBdev3", 00:25:48.972 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:48.972 "is_configured": true, 00:25:48.972 "data_offset": 2048, 00:25:48.972 "data_size": 63488 00:25:48.972 } 00:25:48.972 ] 00:25:48.972 }' 00:25:48.972 17:13:25 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:49.229 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:49.229 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:49.229 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:49.229 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:49.229 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.229 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.229 [2024-11-08 17:13:25.746260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.229 [2024-11-08 17:13:25.757280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:25:49.229 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.229 17:13:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:49.229 [2024-11-08 17:13:25.763060] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:50.186 "name": "raid_bdev1", 00:25:50.186 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:50.186 "strip_size_kb": 64, 00:25:50.186 "state": "online", 00:25:50.186 "raid_level": "raid5f", 00:25:50.186 "superblock": true, 00:25:50.186 "num_base_bdevs": 3, 00:25:50.186 "num_base_bdevs_discovered": 3, 00:25:50.186 "num_base_bdevs_operational": 3, 00:25:50.186 "process": { 00:25:50.186 "type": "rebuild", 00:25:50.186 "target": "spare", 00:25:50.186 "progress": { 00:25:50.186 "blocks": 18432, 00:25:50.186 "percent": 14 00:25:50.186 } 00:25:50.186 }, 00:25:50.186 "base_bdevs_list": [ 00:25:50.186 { 00:25:50.186 "name": "spare", 00:25:50.186 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:50.186 "is_configured": true, 00:25:50.186 "data_offset": 2048, 00:25:50.186 "data_size": 63488 00:25:50.186 }, 00:25:50.186 { 00:25:50.186 "name": "BaseBdev2", 00:25:50.186 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:50.186 "is_configured": true, 00:25:50.186 "data_offset": 2048, 00:25:50.186 "data_size": 63488 00:25:50.186 }, 00:25:50.186 { 00:25:50.186 "name": "BaseBdev3", 00:25:50.186 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:50.186 "is_configured": true, 00:25:50.186 "data_offset": 2048, 00:25:50.186 "data_size": 63488 00:25:50.186 } 00:25:50.186 ] 00:25:50.186 }' 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:50.186 
17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:50.186 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.186 
17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:50.186 "name": "raid_bdev1", 00:25:50.186 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:50.186 "strip_size_kb": 64, 00:25:50.186 "state": "online", 00:25:50.186 "raid_level": "raid5f", 00:25:50.186 "superblock": true, 00:25:50.186 "num_base_bdevs": 3, 00:25:50.186 "num_base_bdevs_discovered": 3, 00:25:50.186 "num_base_bdevs_operational": 3, 00:25:50.186 "process": { 00:25:50.186 "type": "rebuild", 00:25:50.186 "target": "spare", 00:25:50.186 "progress": { 00:25:50.186 "blocks": 20480, 00:25:50.186 "percent": 16 00:25:50.186 } 00:25:50.186 }, 00:25:50.186 "base_bdevs_list": [ 00:25:50.186 { 00:25:50.186 "name": "spare", 00:25:50.186 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:50.186 "is_configured": true, 00:25:50.186 "data_offset": 2048, 00:25:50.186 "data_size": 63488 00:25:50.186 }, 00:25:50.186 { 00:25:50.186 "name": "BaseBdev2", 00:25:50.186 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:50.186 "is_configured": true, 00:25:50.186 "data_offset": 2048, 00:25:50.186 "data_size": 63488 00:25:50.186 }, 00:25:50.186 { 00:25:50.186 "name": "BaseBdev3", 00:25:50.186 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:50.186 "is_configured": true, 00:25:50.186 "data_offset": 2048, 00:25:50.186 "data_size": 63488 00:25:50.186 } 00:25:50.186 ] 00:25:50.186 }' 00:25:50.186 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:50.444 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:50.444 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:50.444 
17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.444 17:13:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:51.383 "name": "raid_bdev1", 00:25:51.383 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:51.383 "strip_size_kb": 64, 00:25:51.383 "state": "online", 00:25:51.383 "raid_level": "raid5f", 00:25:51.383 "superblock": true, 00:25:51.383 "num_base_bdevs": 3, 00:25:51.383 "num_base_bdevs_discovered": 3, 00:25:51.383 "num_base_bdevs_operational": 3, 00:25:51.383 "process": { 00:25:51.383 "type": "rebuild", 00:25:51.383 "target": "spare", 00:25:51.383 
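The trace above repeats one polling pattern: a deadline checked against bash's builtin `SECONDS` counter (`(( SECONDS < timeout ))` at line 707 of bdev_raid.sh), a verification call that dumps the raid bdev JSON and checks `.process.type`/`.process.target` with jq, then `sleep 1`. A hedged sketch of that loop shape, with a stand-in condition instead of the real `verify_raid_bdev_process` RPC helper:

```shell
#!/usr/bin/env bash
# Sketch of the SECONDS-bounded poll loop seen in the trace (lines 706-711
# of bdev_raid.sh). SECONDS is a bash builtin that counts seconds since
# shell startup, so the deadline is expressed relative to it.
deadline=$(( SECONDS + 5 ))   # the real test uses timeout=494

check_ready() {
    # Stand-in (assumption): the real helper runs rpc_cmd + jq against
    # '.process.type // "none"' and '.process.target // "none"'.
    [ -e /tmp/raid_ready ]
}

touch /tmp/raid_ready
verified=0
while (( SECONDS < deadline )); do
    if check_ready; then
        verified=1
        echo "process verified"
        break
    fi
    sleep 1
done
rm -f /tmp/raid_ready
```

The `// "none"` jq fallback in the real helper matters: once the rebuild finishes, the `.process` object disappears from the RPC output, and the loop exits via the `[[ none == \r\e\b\u\i\l\d ]]` mismatch visible later in this log rather than via the timeout.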
"progress": { 00:25:51.383 "blocks": 43008, 00:25:51.383 "percent": 33 00:25:51.383 } 00:25:51.383 }, 00:25:51.383 "base_bdevs_list": [ 00:25:51.383 { 00:25:51.383 "name": "spare", 00:25:51.383 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:51.383 "is_configured": true, 00:25:51.383 "data_offset": 2048, 00:25:51.383 "data_size": 63488 00:25:51.383 }, 00:25:51.383 { 00:25:51.383 "name": "BaseBdev2", 00:25:51.383 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:51.383 "is_configured": true, 00:25:51.383 "data_offset": 2048, 00:25:51.383 "data_size": 63488 00:25:51.383 }, 00:25:51.383 { 00:25:51.383 "name": "BaseBdev3", 00:25:51.383 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:51.383 "is_configured": true, 00:25:51.383 "data_offset": 2048, 00:25:51.383 "data_size": 63488 00:25:51.383 } 00:25:51.383 ] 00:25:51.383 }' 00:25:51.383 17:13:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:51.383 17:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:51.383 17:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:51.383 17:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.383 17:13:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:52.754 
17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:52.754 "name": "raid_bdev1", 00:25:52.754 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:52.754 "strip_size_kb": 64, 00:25:52.754 "state": "online", 00:25:52.754 "raid_level": "raid5f", 00:25:52.754 "superblock": true, 00:25:52.754 "num_base_bdevs": 3, 00:25:52.754 "num_base_bdevs_discovered": 3, 00:25:52.754 "num_base_bdevs_operational": 3, 00:25:52.754 "process": { 00:25:52.754 "type": "rebuild", 00:25:52.754 "target": "spare", 00:25:52.754 "progress": { 00:25:52.754 "blocks": 65536, 00:25:52.754 "percent": 51 00:25:52.754 } 00:25:52.754 }, 00:25:52.754 "base_bdevs_list": [ 00:25:52.754 { 00:25:52.754 "name": "spare", 00:25:52.754 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:52.754 "is_configured": true, 00:25:52.754 "data_offset": 2048, 00:25:52.754 "data_size": 63488 00:25:52.754 }, 00:25:52.754 { 00:25:52.754 "name": "BaseBdev2", 00:25:52.754 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:52.754 "is_configured": true, 00:25:52.754 "data_offset": 2048, 00:25:52.754 "data_size": 63488 00:25:52.754 }, 00:25:52.754 { 00:25:52.754 "name": "BaseBdev3", 00:25:52.754 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:52.754 "is_configured": true, 00:25:52.754 "data_offset": 2048, 00:25:52.754 
"data_size": 63488 00:25:52.754 } 00:25:52.754 ] 00:25:52.754 }' 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.754 17:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:53.735 "name": 
"raid_bdev1", 00:25:53.735 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:53.735 "strip_size_kb": 64, 00:25:53.735 "state": "online", 00:25:53.735 "raid_level": "raid5f", 00:25:53.735 "superblock": true, 00:25:53.735 "num_base_bdevs": 3, 00:25:53.735 "num_base_bdevs_discovered": 3, 00:25:53.735 "num_base_bdevs_operational": 3, 00:25:53.735 "process": { 00:25:53.735 "type": "rebuild", 00:25:53.735 "target": "spare", 00:25:53.735 "progress": { 00:25:53.735 "blocks": 88064, 00:25:53.735 "percent": 69 00:25:53.735 } 00:25:53.735 }, 00:25:53.735 "base_bdevs_list": [ 00:25:53.735 { 00:25:53.735 "name": "spare", 00:25:53.735 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:53.735 "is_configured": true, 00:25:53.735 "data_offset": 2048, 00:25:53.735 "data_size": 63488 00:25:53.735 }, 00:25:53.735 { 00:25:53.735 "name": "BaseBdev2", 00:25:53.735 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:53.735 "is_configured": true, 00:25:53.735 "data_offset": 2048, 00:25:53.735 "data_size": 63488 00:25:53.735 }, 00:25:53.735 { 00:25:53.735 "name": "BaseBdev3", 00:25:53.735 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:53.735 "is_configured": true, 00:25:53.735 "data_offset": 2048, 00:25:53.735 "data_size": 63488 00:25:53.735 } 00:25:53.735 ] 00:25:53.735 }' 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:53.735 17:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:54.668 17:13:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:54.668 "name": "raid_bdev1", 00:25:54.668 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:54.668 "strip_size_kb": 64, 00:25:54.668 "state": "online", 00:25:54.668 "raid_level": "raid5f", 00:25:54.668 "superblock": true, 00:25:54.668 "num_base_bdevs": 3, 00:25:54.668 "num_base_bdevs_discovered": 3, 00:25:54.668 "num_base_bdevs_operational": 3, 00:25:54.668 "process": { 00:25:54.668 "type": "rebuild", 00:25:54.668 "target": "spare", 00:25:54.668 "progress": { 00:25:54.668 "blocks": 110592, 00:25:54.668 "percent": 87 00:25:54.668 } 00:25:54.668 }, 00:25:54.668 "base_bdevs_list": [ 00:25:54.668 { 00:25:54.668 "name": "spare", 00:25:54.668 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:54.668 "is_configured": true, 00:25:54.668 "data_offset": 2048, 00:25:54.668 
"data_size": 63488 00:25:54.668 }, 00:25:54.668 { 00:25:54.668 "name": "BaseBdev2", 00:25:54.668 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:54.668 "is_configured": true, 00:25:54.668 "data_offset": 2048, 00:25:54.668 "data_size": 63488 00:25:54.668 }, 00:25:54.668 { 00:25:54.668 "name": "BaseBdev3", 00:25:54.668 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:54.668 "is_configured": true, 00:25:54.668 "data_offset": 2048, 00:25:54.668 "data_size": 63488 00:25:54.668 } 00:25:54.668 ] 00:25:54.668 }' 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:54.668 17:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:55.599 [2024-11-08 17:13:32.029736] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:55.599 [2024-11-08 17:13:32.029862] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:55.599 [2024-11-08 17:13:32.030000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:55.857 "name": "raid_bdev1", 00:25:55.857 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:55.857 "strip_size_kb": 64, 00:25:55.857 "state": "online", 00:25:55.857 "raid_level": "raid5f", 00:25:55.857 "superblock": true, 00:25:55.857 "num_base_bdevs": 3, 00:25:55.857 "num_base_bdevs_discovered": 3, 00:25:55.857 "num_base_bdevs_operational": 3, 00:25:55.857 "base_bdevs_list": [ 00:25:55.857 { 00:25:55.857 "name": "spare", 00:25:55.857 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:55.857 "is_configured": true, 00:25:55.857 "data_offset": 2048, 00:25:55.857 "data_size": 63488 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "name": "BaseBdev2", 00:25:55.857 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:55.857 "is_configured": true, 00:25:55.857 "data_offset": 2048, 00:25:55.857 "data_size": 63488 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "name": "BaseBdev3", 00:25:55.857 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:55.857 "is_configured": true, 00:25:55.857 "data_offset": 2048, 00:25:55.857 "data_size": 63488 00:25:55.857 } 00:25:55.857 ] 00:25:55.857 }' 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:55.857 "name": "raid_bdev1", 00:25:55.857 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:55.857 "strip_size_kb": 64, 00:25:55.857 "state": "online", 00:25:55.857 "raid_level": "raid5f", 00:25:55.857 "superblock": true, 00:25:55.857 "num_base_bdevs": 3, 00:25:55.857 
"num_base_bdevs_discovered": 3, 00:25:55.857 "num_base_bdevs_operational": 3, 00:25:55.857 "base_bdevs_list": [ 00:25:55.857 { 00:25:55.857 "name": "spare", 00:25:55.857 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:55.857 "is_configured": true, 00:25:55.857 "data_offset": 2048, 00:25:55.857 "data_size": 63488 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "name": "BaseBdev2", 00:25:55.857 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:55.857 "is_configured": true, 00:25:55.857 "data_offset": 2048, 00:25:55.857 "data_size": 63488 00:25:55.857 }, 00:25:55.857 { 00:25:55.857 "name": "BaseBdev3", 00:25:55.857 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:55.857 "is_configured": true, 00:25:55.857 "data_offset": 2048, 00:25:55.857 "data_size": 63488 00:25:55.857 } 00:25:55.857 ] 00:25:55.857 }' 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:55.857 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:56.115 17:13:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.115 "name": "raid_bdev1", 00:25:56.115 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:56.115 "strip_size_kb": 64, 00:25:56.115 "state": "online", 00:25:56.115 "raid_level": "raid5f", 00:25:56.115 "superblock": true, 00:25:56.115 "num_base_bdevs": 3, 00:25:56.115 "num_base_bdevs_discovered": 3, 00:25:56.115 "num_base_bdevs_operational": 3, 00:25:56.115 "base_bdevs_list": [ 00:25:56.115 { 00:25:56.115 "name": "spare", 00:25:56.115 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:56.115 "is_configured": true, 00:25:56.115 "data_offset": 2048, 00:25:56.115 "data_size": 63488 00:25:56.115 }, 00:25:56.115 { 00:25:56.115 "name": "BaseBdev2", 00:25:56.115 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:56.115 "is_configured": true, 00:25:56.115 "data_offset": 2048, 00:25:56.115 "data_size": 63488 00:25:56.115 }, 00:25:56.115 { 00:25:56.115 "name": "BaseBdev3", 00:25:56.115 "uuid": 
"459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:56.115 "is_configured": true, 00:25:56.115 "data_offset": 2048, 00:25:56.115 "data_size": 63488 00:25:56.115 } 00:25:56.115 ] 00:25:56.115 }' 00:25:56.115 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.116 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.374 [2024-11-08 17:13:32.897313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:56.374 [2024-11-08 17:13:32.897348] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:56.374 [2024-11-08 17:13:32.897441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:56.374 [2024-11-08 17:13:32.897535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:56.374 [2024-11-08 17:13:32.897552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:56.374 17:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:56.633 /dev/nbd0 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 
00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:56.633 1+0 records in 00:25:56.633 1+0 records out 00:25:56.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360329 s, 11.4 MB/s 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:56.633 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:56.891 /dev/nbd1 00:25:56.891 
17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:56.891 1+0 records in 00:25:56.891 1+0 records out 00:25:56.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424608 s, 9.6 MB/s 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 
00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:56.891 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:57.149 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:57.149 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:57.149 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:57.149 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:57.149 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:57.149 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:57.149 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:57.149 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:57.149 
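The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step above is the actual data-integrity check: it compares the original base bdev (BaseBdev1) against the rebuilt spare over NBD, skipping the first 1 MiB of both devices, where the RAID superblock (which legitimately differs per bdev) lives. GNU cmp's `-i N` (`--ignore-initial`) skips N bytes in each input. A small file-based illustration, with a 4-byte header standing in for the superblock and hypothetical file names:

```shell
#!/usr/bin/env bash
# cmp -i N skips the first N bytes of BOTH inputs, so two files whose only
# difference is a small leading header compare equal past that offset.
printf 'AAAAidentical-payload' > /tmp/dev_a   # stand-in for /dev/nbd0
printf 'BBBBidentical-payload' > /tmp/dev_b   # stand-in for /dev/nbd1

# Without the skip, the files differ at byte 1.
cmp -s /tmp/dev_a /tmp/dev_b || echo "differ from byte 0"

# Skipping the 4-byte "superblock", the payloads match (exit status 0).
cmp -i 4 /tmp/dev_a /tmp/dev_b && echo "equal after skipping header"
```

In the real test the offset is 1048576 because SPDK reserves the data_offset region (2048 blocks of 512 bytes, matching the `"data_offset": 2048` fields in the JSON dumps above) for superblock metadata.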
17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:57.149 17:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.407 [2024-11-08 
17:13:34.027482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:57.407 [2024-11-08 17:13:34.027554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:57.407 [2024-11-08 17:13:34.027578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:57.407 [2024-11-08 17:13:34.027591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:57.407 [2024-11-08 17:13:34.030053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:57.407 [2024-11-08 17:13:34.030091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:57.407 [2024-11-08 17:13:34.030240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:57.407 [2024-11-08 17:13:34.030295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:57.407 [2024-11-08 17:13:34.030399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:57.407 [2024-11-08 17:13:34.030499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:57.407 spare 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.407 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.666 [2024-11-08 17:13:34.130611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:57.666 [2024-11-08 17:13:34.130677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:57.666 [2024-11-08 17:13:34.131067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000047700 00:25:57.666 [2024-11-08 17:13:34.134644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:57.666 [2024-11-08 17:13:34.134668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:57.666 [2024-11-08 17:13:34.134895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.666 "name": "raid_bdev1", 00:25:57.666 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:57.666 "strip_size_kb": 64, 00:25:57.666 "state": "online", 00:25:57.666 "raid_level": "raid5f", 00:25:57.666 "superblock": true, 00:25:57.666 "num_base_bdevs": 3, 00:25:57.666 "num_base_bdevs_discovered": 3, 00:25:57.666 "num_base_bdevs_operational": 3, 00:25:57.666 "base_bdevs_list": [ 00:25:57.666 { 00:25:57.666 "name": "spare", 00:25:57.666 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:57.666 "is_configured": true, 00:25:57.666 "data_offset": 2048, 00:25:57.666 "data_size": 63488 00:25:57.666 }, 00:25:57.666 { 00:25:57.666 "name": "BaseBdev2", 00:25:57.666 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:57.666 "is_configured": true, 00:25:57.666 "data_offset": 2048, 00:25:57.666 "data_size": 63488 00:25:57.666 }, 00:25:57.666 { 00:25:57.666 "name": "BaseBdev3", 00:25:57.666 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:57.666 "is_configured": true, 00:25:57.666 "data_offset": 2048, 00:25:57.666 "data_size": 63488 00:25:57.666 } 00:25:57.666 ] 00:25:57.666 }' 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.666 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:57.925 "name": "raid_bdev1", 00:25:57.925 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:57.925 "strip_size_kb": 64, 00:25:57.925 "state": "online", 00:25:57.925 "raid_level": "raid5f", 00:25:57.925 "superblock": true, 00:25:57.925 "num_base_bdevs": 3, 00:25:57.925 "num_base_bdevs_discovered": 3, 00:25:57.925 "num_base_bdevs_operational": 3, 00:25:57.925 "base_bdevs_list": [ 00:25:57.925 { 00:25:57.925 "name": "spare", 00:25:57.925 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:57.925 "is_configured": true, 00:25:57.925 "data_offset": 2048, 00:25:57.925 "data_size": 63488 00:25:57.925 }, 00:25:57.925 { 00:25:57.925 "name": "BaseBdev2", 00:25:57.925 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:57.925 "is_configured": true, 00:25:57.925 "data_offset": 2048, 00:25:57.925 "data_size": 63488 00:25:57.925 }, 00:25:57.925 { 00:25:57.925 "name": "BaseBdev3", 00:25:57.925 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:57.925 "is_configured": true, 00:25:57.925 "data_offset": 2048, 00:25:57.925 "data_size": 63488 00:25:57.925 } 00:25:57.925 ] 00:25:57.925 }' 00:25:57.925 17:13:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.925 [2024-11-08 17:13:34.599285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.925 "name": "raid_bdev1", 00:25:57.925 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:57.925 "strip_size_kb": 64, 00:25:57.925 "state": "online", 00:25:57.925 "raid_level": "raid5f", 00:25:57.925 "superblock": true, 00:25:57.925 "num_base_bdevs": 3, 00:25:57.925 "num_base_bdevs_discovered": 2, 00:25:57.925 "num_base_bdevs_operational": 2, 00:25:57.925 "base_bdevs_list": [ 00:25:57.925 { 00:25:57.925 "name": null, 00:25:57.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.925 "is_configured": false, 00:25:57.925 
"data_offset": 0, 00:25:57.925 "data_size": 63488 00:25:57.925 }, 00:25:57.925 { 00:25:57.925 "name": "BaseBdev2", 00:25:57.925 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:57.925 "is_configured": true, 00:25:57.925 "data_offset": 2048, 00:25:57.925 "data_size": 63488 00:25:57.925 }, 00:25:57.925 { 00:25:57.925 "name": "BaseBdev3", 00:25:57.925 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:57.925 "is_configured": true, 00:25:57.925 "data_offset": 2048, 00:25:57.925 "data_size": 63488 00:25:57.925 } 00:25:57.925 ] 00:25:57.925 }' 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.925 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.492 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:58.492 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.492 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.492 [2024-11-08 17:13:34.931369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:58.492 [2024-11-08 17:13:34.931593] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:58.492 [2024-11-08 17:13:34.931612] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:58.492 [2024-11-08 17:13:34.931661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:58.492 [2024-11-08 17:13:34.942866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:25:58.492 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.492 17:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:58.492 [2024-11-08 17:13:34.948409] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.461 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:59.461 "name": "raid_bdev1", 00:25:59.461 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:59.461 "strip_size_kb": 64, 00:25:59.461 "state": "online", 00:25:59.461 
"raid_level": "raid5f", 00:25:59.461 "superblock": true, 00:25:59.461 "num_base_bdevs": 3, 00:25:59.461 "num_base_bdevs_discovered": 3, 00:25:59.461 "num_base_bdevs_operational": 3, 00:25:59.461 "process": { 00:25:59.461 "type": "rebuild", 00:25:59.461 "target": "spare", 00:25:59.461 "progress": { 00:25:59.461 "blocks": 18432, 00:25:59.461 "percent": 14 00:25:59.461 } 00:25:59.461 }, 00:25:59.461 "base_bdevs_list": [ 00:25:59.461 { 00:25:59.461 "name": "spare", 00:25:59.461 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:25:59.461 "is_configured": true, 00:25:59.461 "data_offset": 2048, 00:25:59.461 "data_size": 63488 00:25:59.461 }, 00:25:59.461 { 00:25:59.461 "name": "BaseBdev2", 00:25:59.461 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:59.461 "is_configured": true, 00:25:59.462 "data_offset": 2048, 00:25:59.462 "data_size": 63488 00:25:59.462 }, 00:25:59.462 { 00:25:59.462 "name": "BaseBdev3", 00:25:59.462 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:59.462 "is_configured": true, 00:25:59.462 "data_offset": 2048, 00:25:59.462 "data_size": 63488 00:25:59.462 } 00:25:59.462 ] 00:25:59.462 }' 00:25:59.462 17:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.462 [2024-11-08 17:13:36.057534] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:59.462 [2024-11-08 17:13:36.060410] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:59.462 [2024-11-08 17:13:36.060475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.462 [2024-11-08 17:13:36.060492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:59.462 [2024-11-08 17:13:36.060501] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:59.462 "name": "raid_bdev1", 00:25:59.462 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:25:59.462 "strip_size_kb": 64, 00:25:59.462 "state": "online", 00:25:59.462 "raid_level": "raid5f", 00:25:59.462 "superblock": true, 00:25:59.462 "num_base_bdevs": 3, 00:25:59.462 "num_base_bdevs_discovered": 2, 00:25:59.462 "num_base_bdevs_operational": 2, 00:25:59.462 "base_bdevs_list": [ 00:25:59.462 { 00:25:59.462 "name": null, 00:25:59.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.462 "is_configured": false, 00:25:59.462 "data_offset": 0, 00:25:59.462 "data_size": 63488 00:25:59.462 }, 00:25:59.462 { 00:25:59.462 "name": "BaseBdev2", 00:25:59.462 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:25:59.462 "is_configured": true, 00:25:59.462 "data_offset": 2048, 00:25:59.462 "data_size": 63488 00:25:59.462 }, 00:25:59.462 { 00:25:59.462 "name": "BaseBdev3", 00:25:59.462 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:25:59.462 "is_configured": true, 00:25:59.462 "data_offset": 2048, 00:25:59.462 "data_size": 63488 00:25:59.462 } 00:25:59.462 ] 00:25:59.462 }' 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:59.462 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.720 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:59.720 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.720 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.720 [2024-11-08 17:13:36.432282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:59.720 [2024-11-08 17:13:36.432359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.720 [2024-11-08 17:13:36.432383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:59.720 [2024-11-08 17:13:36.432398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.720 [2024-11-08 17:13:36.432936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.720 [2024-11-08 17:13:36.432962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:59.720 [2024-11-08 17:13:36.433064] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:59.720 [2024-11-08 17:13:36.433082] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:59.720 [2024-11-08 17:13:36.433092] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:59.720 [2024-11-08 17:13:36.433121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:59.977 [2024-11-08 17:13:36.444074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:25:59.977 spare 00:25:59.977 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.977 17:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:59.977 [2024-11-08 17:13:36.449545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:00.913 "name": "raid_bdev1", 00:26:00.913 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:26:00.913 "strip_size_kb": 64, 00:26:00.913 "state": 
"online", 00:26:00.913 "raid_level": "raid5f", 00:26:00.913 "superblock": true, 00:26:00.913 "num_base_bdevs": 3, 00:26:00.913 "num_base_bdevs_discovered": 3, 00:26:00.913 "num_base_bdevs_operational": 3, 00:26:00.913 "process": { 00:26:00.913 "type": "rebuild", 00:26:00.913 "target": "spare", 00:26:00.913 "progress": { 00:26:00.913 "blocks": 18432, 00:26:00.913 "percent": 14 00:26:00.913 } 00:26:00.913 }, 00:26:00.913 "base_bdevs_list": [ 00:26:00.913 { 00:26:00.913 "name": "spare", 00:26:00.913 "uuid": "0c005a03-03a5-5549-bfef-4cbac6cf96c0", 00:26:00.913 "is_configured": true, 00:26:00.913 "data_offset": 2048, 00:26:00.913 "data_size": 63488 00:26:00.913 }, 00:26:00.913 { 00:26:00.913 "name": "BaseBdev2", 00:26:00.913 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:26:00.913 "is_configured": true, 00:26:00.913 "data_offset": 2048, 00:26:00.913 "data_size": 63488 00:26:00.913 }, 00:26:00.913 { 00:26:00.913 "name": "BaseBdev3", 00:26:00.913 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:26:00.913 "is_configured": true, 00:26:00.913 "data_offset": 2048, 00:26:00.913 "data_size": 63488 00:26:00.913 } 00:26:00.913 ] 00:26:00.913 }' 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.913 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:00.913 [2024-11-08 17:13:37.563260] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:01.171 [2024-11-08 17:13:37.662579] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:01.171 [2024-11-08 17:13:37.662652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:01.171 [2024-11-08 17:13:37.662672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:01.171 [2024-11-08 17:13:37.662681] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:01.171 "name": "raid_bdev1", 00:26:01.171 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:26:01.171 "strip_size_kb": 64, 00:26:01.171 "state": "online", 00:26:01.171 "raid_level": "raid5f", 00:26:01.171 "superblock": true, 00:26:01.171 "num_base_bdevs": 3, 00:26:01.171 "num_base_bdevs_discovered": 2, 00:26:01.171 "num_base_bdevs_operational": 2, 00:26:01.171 "base_bdevs_list": [ 00:26:01.171 { 00:26:01.171 "name": null, 00:26:01.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.171 "is_configured": false, 00:26:01.171 "data_offset": 0, 00:26:01.171 "data_size": 63488 00:26:01.171 }, 00:26:01.171 { 00:26:01.171 "name": "BaseBdev2", 00:26:01.171 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:26:01.171 "is_configured": true, 00:26:01.171 "data_offset": 2048, 00:26:01.171 "data_size": 63488 00:26:01.171 }, 00:26:01.171 { 00:26:01.171 "name": "BaseBdev3", 00:26:01.171 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:26:01.171 "is_configured": true, 00:26:01.171 "data_offset": 2048, 00:26:01.171 "data_size": 63488 00:26:01.171 } 00:26:01.171 ] 00:26:01.171 }' 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:01.171 17:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:01.429 "name": "raid_bdev1", 00:26:01.429 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:26:01.429 "strip_size_kb": 64, 00:26:01.429 "state": "online", 00:26:01.429 "raid_level": "raid5f", 00:26:01.429 "superblock": true, 00:26:01.429 "num_base_bdevs": 3, 00:26:01.429 "num_base_bdevs_discovered": 2, 00:26:01.429 "num_base_bdevs_operational": 2, 00:26:01.429 "base_bdevs_list": [ 00:26:01.429 { 00:26:01.429 "name": null, 00:26:01.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.429 "is_configured": false, 00:26:01.429 "data_offset": 0, 00:26:01.429 "data_size": 63488 00:26:01.429 }, 00:26:01.429 { 00:26:01.429 "name": "BaseBdev2", 00:26:01.429 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:26:01.429 "is_configured": true, 00:26:01.429 "data_offset": 2048, 00:26:01.429 "data_size": 63488 00:26:01.429 }, 00:26:01.429 { 00:26:01.429 "name": "BaseBdev3", 00:26:01.429 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:26:01.429 "is_configured": true, 
00:26:01.429 "data_offset": 2048, 00:26:01.429 "data_size": 63488 00:26:01.429 } 00:26:01.429 ] 00:26:01.429 }' 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.429 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.429 [2024-11-08 17:13:38.138669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:01.429 [2024-11-08 17:13:38.138736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.429 [2024-11-08 17:13:38.138774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:01.429 [2024-11-08 17:13:38.138785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.429 [2024-11-08 17:13:38.139292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.429 [2024-11-08 
17:13:38.139315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:01.429 [2024-11-08 17:13:38.139402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:01.429 [2024-11-08 17:13:38.139418] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:01.429 [2024-11-08 17:13:38.139430] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:01.429 [2024-11-08 17:13:38.139447] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:26:01.687 BaseBdev1 00:26:01.687 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.687 17:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.619 17:13:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.619 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.619 "name": "raid_bdev1", 00:26:02.619 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:26:02.619 "strip_size_kb": 64, 00:26:02.619 "state": "online", 00:26:02.619 "raid_level": "raid5f", 00:26:02.619 "superblock": true, 00:26:02.619 "num_base_bdevs": 3, 00:26:02.619 "num_base_bdevs_discovered": 2, 00:26:02.619 "num_base_bdevs_operational": 2, 00:26:02.619 "base_bdevs_list": [ 00:26:02.619 { 00:26:02.619 "name": null, 00:26:02.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.620 "is_configured": false, 00:26:02.620 "data_offset": 0, 00:26:02.620 "data_size": 63488 00:26:02.620 }, 00:26:02.620 { 00:26:02.620 "name": "BaseBdev2", 00:26:02.620 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:26:02.620 "is_configured": true, 00:26:02.620 "data_offset": 2048, 00:26:02.620 "data_size": 63488 00:26:02.620 }, 00:26:02.620 { 00:26:02.620 "name": "BaseBdev3", 00:26:02.620 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:26:02.620 "is_configured": true, 00:26:02.620 "data_offset": 2048, 00:26:02.620 "data_size": 63488 00:26:02.620 } 00:26:02.620 ] 00:26:02.620 }' 00:26:02.620 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.620 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:02.877 "name": "raid_bdev1", 00:26:02.877 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:26:02.877 "strip_size_kb": 64, 00:26:02.877 "state": "online", 00:26:02.877 "raid_level": "raid5f", 00:26:02.877 "superblock": true, 00:26:02.877 "num_base_bdevs": 3, 00:26:02.877 "num_base_bdevs_discovered": 2, 00:26:02.877 "num_base_bdevs_operational": 2, 00:26:02.877 "base_bdevs_list": [ 00:26:02.877 { 00:26:02.877 "name": null, 00:26:02.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.877 "is_configured": false, 00:26:02.877 "data_offset": 0, 00:26:02.877 "data_size": 63488 00:26:02.877 }, 00:26:02.877 { 00:26:02.877 "name": "BaseBdev2", 00:26:02.877 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 
00:26:02.877 "is_configured": true, 00:26:02.877 "data_offset": 2048, 00:26:02.877 "data_size": 63488 00:26:02.877 }, 00:26:02.877 { 00:26:02.877 "name": "BaseBdev3", 00:26:02.877 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:26:02.877 "is_configured": true, 00:26:02.877 "data_offset": 2048, 00:26:02.877 "data_size": 63488 00:26:02.877 } 00:26:02.877 ] 00:26:02.877 }' 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.877 17:13:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.877 [2024-11-08 17:13:39.575084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:02.877 [2024-11-08 17:13:39.575268] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:02.877 [2024-11-08 17:13:39.575287] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:02.877 request: 00:26:02.877 { 00:26:02.877 "base_bdev": "BaseBdev1", 00:26:02.877 "raid_bdev": "raid_bdev1", 00:26:02.877 "method": "bdev_raid_add_base_bdev", 00:26:02.877 "req_id": 1 00:26:02.877 } 00:26:02.877 Got JSON-RPC error response 00:26:02.877 response: 00:26:02.877 { 00:26:02.877 "code": -22, 00:26:02.877 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:02.877 } 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:02.877 17:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:04.257 "name": "raid_bdev1", 00:26:04.257 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:26:04.257 "strip_size_kb": 64, 00:26:04.257 "state": "online", 00:26:04.257 "raid_level": "raid5f", 00:26:04.257 "superblock": true, 00:26:04.257 "num_base_bdevs": 3, 00:26:04.257 "num_base_bdevs_discovered": 2, 00:26:04.257 "num_base_bdevs_operational": 2, 00:26:04.257 "base_bdevs_list": [ 00:26:04.257 { 00:26:04.257 "name": null, 00:26:04.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.257 "is_configured": false, 00:26:04.257 "data_offset": 0, 00:26:04.257 "data_size": 63488 00:26:04.257 }, 00:26:04.257 { 00:26:04.257 
"name": "BaseBdev2", 00:26:04.257 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:26:04.257 "is_configured": true, 00:26:04.257 "data_offset": 2048, 00:26:04.257 "data_size": 63488 00:26:04.257 }, 00:26:04.257 { 00:26:04.257 "name": "BaseBdev3", 00:26:04.257 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:26:04.257 "is_configured": true, 00:26:04.257 "data_offset": 2048, 00:26:04.257 "data_size": 63488 00:26:04.257 } 00:26:04.257 ] 00:26:04.257 }' 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.257 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:04.257 "name": "raid_bdev1", 00:26:04.257 "uuid": "6f6f82e8-c715-422c-92e8-9392593cd0c8", 00:26:04.257 
"strip_size_kb": 64, 00:26:04.257 "state": "online", 00:26:04.257 "raid_level": "raid5f", 00:26:04.257 "superblock": true, 00:26:04.257 "num_base_bdevs": 3, 00:26:04.257 "num_base_bdevs_discovered": 2, 00:26:04.257 "num_base_bdevs_operational": 2, 00:26:04.257 "base_bdevs_list": [ 00:26:04.257 { 00:26:04.257 "name": null, 00:26:04.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.257 "is_configured": false, 00:26:04.257 "data_offset": 0, 00:26:04.257 "data_size": 63488 00:26:04.257 }, 00:26:04.257 { 00:26:04.257 "name": "BaseBdev2", 00:26:04.257 "uuid": "4cb3af94-c5ac-5830-a57d-f26b059f077d", 00:26:04.257 "is_configured": true, 00:26:04.257 "data_offset": 2048, 00:26:04.258 "data_size": 63488 00:26:04.258 }, 00:26:04.258 { 00:26:04.258 "name": "BaseBdev3", 00:26:04.258 "uuid": "459b3032-a248-5020-9e2b-9ffa0b062fad", 00:26:04.258 "is_configured": true, 00:26:04.258 "data_offset": 2048, 00:26:04.258 "data_size": 63488 00:26:04.258 } 00:26:04.258 ] 00:26:04.258 }' 00:26:04.258 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:04.515 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:04.515 17:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 80398 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80398 ']' 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 80398 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:04.515 17:13:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80398 00:26:04.515 killing process with pid 80398 00:26:04.515 Received shutdown signal, test time was about 60.000000 seconds 00:26:04.515 00:26:04.515 Latency(us) 00:26:04.515 [2024-11-08T17:13:41.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.515 [2024-11-08T17:13:41.230Z] =================================================================================================================== 00:26:04.515 [2024-11-08T17:13:41.230Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80398' 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 80398 00:26:04.515 17:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 80398 00:26:04.515 [2024-11-08 17:13:41.039148] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:04.515 [2024-11-08 17:13:41.039286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:04.515 [2024-11-08 17:13:41.039374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:04.515 [2024-11-08 17:13:41.039393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:26:04.773 [2024-11-08 17:13:41.297043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:05.340 ************************************ 00:26:05.340 END TEST raid5f_rebuild_test_sb 00:26:05.340 ************************************ 00:26:05.340 17:13:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:26:05.340 00:26:05.340 real 0m20.878s 00:26:05.340 user 0m25.875s 00:26:05.340 sys 0m2.195s 00:26:05.340 17:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:05.340 17:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.597 17:13:42 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:26:05.597 17:13:42 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:26:05.597 17:13:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:05.597 17:13:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:05.597 17:13:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:05.597 ************************************ 00:26:05.597 START TEST raid5f_state_function_test 00:26:05.597 ************************************ 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:05.597 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:05.598 Process raid pid: 81128 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81128 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81128' 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81128 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 81128 ']' 00:26:05.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:05.598 17:13:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.598 [2024-11-08 17:13:42.196863] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:26:05.598 [2024-11-08 17:13:42.197011] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.855 [2024-11-08 17:13:42.364547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.855 [2024-11-08 17:13:42.490161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.113 [2024-11-08 17:13:42.640555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:06.113 [2024-11-08 17:13:42.640611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:06.698 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:06.698 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:26:06.698 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:06.698 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.698 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.699 [2024-11-08 17:13:43.128431] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:06.699 [2024-11-08 17:13:43.128492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:06.699 [2024-11-08 17:13:43.128505] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:06.699 [2024-11-08 17:13:43.128516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:06.699 [2024-11-08 17:13:43.128523] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:26:06.699 [2024-11-08 17:13:43.128533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:06.699 [2024-11-08 17:13:43.128540] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:06.699 [2024-11-08 17:13:43.128550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.699 17:13:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.699 "name": "Existed_Raid", 00:26:06.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.699 "strip_size_kb": 64, 00:26:06.699 "state": "configuring", 00:26:06.699 "raid_level": "raid5f", 00:26:06.699 "superblock": false, 00:26:06.699 "num_base_bdevs": 4, 00:26:06.699 "num_base_bdevs_discovered": 0, 00:26:06.699 "num_base_bdevs_operational": 4, 00:26:06.699 "base_bdevs_list": [ 00:26:06.699 { 00:26:06.699 "name": "BaseBdev1", 00:26:06.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.699 "is_configured": false, 00:26:06.699 "data_offset": 0, 00:26:06.699 "data_size": 0 00:26:06.699 }, 00:26:06.699 { 00:26:06.699 "name": "BaseBdev2", 00:26:06.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.699 "is_configured": false, 00:26:06.699 "data_offset": 0, 00:26:06.699 "data_size": 0 00:26:06.699 }, 00:26:06.699 { 00:26:06.699 "name": "BaseBdev3", 00:26:06.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.699 "is_configured": false, 00:26:06.699 "data_offset": 0, 00:26:06.699 "data_size": 0 00:26:06.699 }, 00:26:06.699 { 00:26:06.699 "name": "BaseBdev4", 00:26:06.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.699 "is_configured": false, 00:26:06.699 "data_offset": 0, 00:26:06.699 "data_size": 0 00:26:06.699 } 00:26:06.699 ] 00:26:06.699 }' 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.699 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.957 [2024-11-08 17:13:43.460437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:06.957 [2024-11-08 17:13:43.460482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.957 [2024-11-08 17:13:43.468429] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:06.957 [2024-11-08 17:13:43.468471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:06.957 [2024-11-08 17:13:43.468480] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:06.957 [2024-11-08 17:13:43.468489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:06.957 [2024-11-08 17:13:43.468495] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:06.957 [2024-11-08 17:13:43.468504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:06.957 [2024-11-08 17:13:43.468510] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:26:06.957 [2024-11-08 17:13:43.468519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.957 [2024-11-08 17:13:43.502906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:06.957 BaseBdev1 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:06.957 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.958 
17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.958 [ 00:26:06.958 { 00:26:06.958 "name": "BaseBdev1", 00:26:06.958 "aliases": [ 00:26:06.958 "3de33940-e529-4788-9fa2-851fe403b741" 00:26:06.958 ], 00:26:06.958 "product_name": "Malloc disk", 00:26:06.958 "block_size": 512, 00:26:06.958 "num_blocks": 65536, 00:26:06.958 "uuid": "3de33940-e529-4788-9fa2-851fe403b741", 00:26:06.958 "assigned_rate_limits": { 00:26:06.958 "rw_ios_per_sec": 0, 00:26:06.958 "rw_mbytes_per_sec": 0, 00:26:06.958 "r_mbytes_per_sec": 0, 00:26:06.958 "w_mbytes_per_sec": 0 00:26:06.958 }, 00:26:06.958 "claimed": true, 00:26:06.958 "claim_type": "exclusive_write", 00:26:06.958 "zoned": false, 00:26:06.958 "supported_io_types": { 00:26:06.958 "read": true, 00:26:06.958 "write": true, 00:26:06.958 "unmap": true, 00:26:06.958 "flush": true, 00:26:06.958 "reset": true, 00:26:06.958 "nvme_admin": false, 00:26:06.958 "nvme_io": false, 00:26:06.958 "nvme_io_md": false, 00:26:06.958 "write_zeroes": true, 00:26:06.958 "zcopy": true, 00:26:06.958 "get_zone_info": false, 00:26:06.958 "zone_management": false, 00:26:06.958 "zone_append": false, 00:26:06.958 "compare": false, 00:26:06.958 "compare_and_write": false, 00:26:06.958 "abort": true, 00:26:06.958 "seek_hole": false, 00:26:06.958 "seek_data": false, 00:26:06.958 "copy": true, 00:26:06.958 "nvme_iov_md": false 00:26:06.958 }, 00:26:06.958 "memory_domains": [ 00:26:06.958 { 00:26:06.958 "dma_device_id": "system", 00:26:06.958 "dma_device_type": 1 00:26:06.958 }, 00:26:06.958 { 00:26:06.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.958 "dma_device_type": 2 00:26:06.958 } 00:26:06.958 ], 00:26:06.958 "driver_specific": {} 00:26:06.958 } 
00:26:06.958 ] 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.958 "name": "Existed_Raid", 00:26:06.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.958 "strip_size_kb": 64, 00:26:06.958 "state": "configuring", 00:26:06.958 "raid_level": "raid5f", 00:26:06.958 "superblock": false, 00:26:06.958 "num_base_bdevs": 4, 00:26:06.958 "num_base_bdevs_discovered": 1, 00:26:06.958 "num_base_bdevs_operational": 4, 00:26:06.958 "base_bdevs_list": [ 00:26:06.958 { 00:26:06.958 "name": "BaseBdev1", 00:26:06.958 "uuid": "3de33940-e529-4788-9fa2-851fe403b741", 00:26:06.958 "is_configured": true, 00:26:06.958 "data_offset": 0, 00:26:06.958 "data_size": 65536 00:26:06.958 }, 00:26:06.958 { 00:26:06.958 "name": "BaseBdev2", 00:26:06.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.958 "is_configured": false, 00:26:06.958 "data_offset": 0, 00:26:06.958 "data_size": 0 00:26:06.958 }, 00:26:06.958 { 00:26:06.958 "name": "BaseBdev3", 00:26:06.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.958 "is_configured": false, 00:26:06.958 "data_offset": 0, 00:26:06.958 "data_size": 0 00:26:06.958 }, 00:26:06.958 { 00:26:06.958 "name": "BaseBdev4", 00:26:06.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.958 "is_configured": false, 00:26:06.958 "data_offset": 0, 00:26:06.958 "data_size": 0 00:26:06.958 } 00:26:06.958 ] 00:26:06.958 }' 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.958 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.217 
[2024-11-08 17:13:43.851055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:07.217 [2024-11-08 17:13:43.851116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.217 [2024-11-08 17:13:43.859095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:07.217 [2024-11-08 17:13:43.861094] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:07.217 [2024-11-08 17:13:43.861139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:07.217 [2024-11-08 17:13:43.861148] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:07.217 [2024-11-08 17:13:43.861161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:07.217 [2024-11-08 17:13:43.861167] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:07.217 [2024-11-08 17:13:43.861176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.217 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.217 "name": "Existed_Raid", 00:26:07.217 "uuid": "00000000-0000-0000-0000-000000000000", 
00:26:07.217 "strip_size_kb": 64, 00:26:07.217 "state": "configuring", 00:26:07.217 "raid_level": "raid5f", 00:26:07.217 "superblock": false, 00:26:07.217 "num_base_bdevs": 4, 00:26:07.217 "num_base_bdevs_discovered": 1, 00:26:07.217 "num_base_bdevs_operational": 4, 00:26:07.217 "base_bdevs_list": [ 00:26:07.217 { 00:26:07.217 "name": "BaseBdev1", 00:26:07.217 "uuid": "3de33940-e529-4788-9fa2-851fe403b741", 00:26:07.217 "is_configured": true, 00:26:07.217 "data_offset": 0, 00:26:07.217 "data_size": 65536 00:26:07.217 }, 00:26:07.217 { 00:26:07.217 "name": "BaseBdev2", 00:26:07.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.217 "is_configured": false, 00:26:07.217 "data_offset": 0, 00:26:07.217 "data_size": 0 00:26:07.218 }, 00:26:07.218 { 00:26:07.218 "name": "BaseBdev3", 00:26:07.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.218 "is_configured": false, 00:26:07.218 "data_offset": 0, 00:26:07.218 "data_size": 0 00:26:07.218 }, 00:26:07.218 { 00:26:07.218 "name": "BaseBdev4", 00:26:07.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.218 "is_configured": false, 00:26:07.218 "data_offset": 0, 00:26:07.218 "data_size": 0 00:26:07.218 } 00:26:07.218 ] 00:26:07.218 }' 00:26:07.218 17:13:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.218 17:13:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.785 [2024-11-08 17:13:44.224117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:07.785 BaseBdev2 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.785 [ 00:26:07.785 { 00:26:07.785 "name": "BaseBdev2", 00:26:07.785 "aliases": [ 00:26:07.785 "86b49f2c-f253-423e-b50d-c8eb1ff746e9" 00:26:07.785 ], 00:26:07.785 "product_name": "Malloc disk", 00:26:07.785 "block_size": 512, 00:26:07.785 "num_blocks": 65536, 00:26:07.785 "uuid": "86b49f2c-f253-423e-b50d-c8eb1ff746e9", 00:26:07.785 "assigned_rate_limits": { 00:26:07.785 "rw_ios_per_sec": 0, 00:26:07.785 "rw_mbytes_per_sec": 0, 00:26:07.785 
"r_mbytes_per_sec": 0, 00:26:07.785 "w_mbytes_per_sec": 0 00:26:07.785 }, 00:26:07.785 "claimed": true, 00:26:07.785 "claim_type": "exclusive_write", 00:26:07.785 "zoned": false, 00:26:07.785 "supported_io_types": { 00:26:07.785 "read": true, 00:26:07.785 "write": true, 00:26:07.785 "unmap": true, 00:26:07.785 "flush": true, 00:26:07.785 "reset": true, 00:26:07.785 "nvme_admin": false, 00:26:07.785 "nvme_io": false, 00:26:07.785 "nvme_io_md": false, 00:26:07.785 "write_zeroes": true, 00:26:07.785 "zcopy": true, 00:26:07.785 "get_zone_info": false, 00:26:07.785 "zone_management": false, 00:26:07.785 "zone_append": false, 00:26:07.785 "compare": false, 00:26:07.785 "compare_and_write": false, 00:26:07.785 "abort": true, 00:26:07.785 "seek_hole": false, 00:26:07.785 "seek_data": false, 00:26:07.785 "copy": true, 00:26:07.785 "nvme_iov_md": false 00:26:07.785 }, 00:26:07.785 "memory_domains": [ 00:26:07.785 { 00:26:07.785 "dma_device_id": "system", 00:26:07.785 "dma_device_type": 1 00:26:07.785 }, 00:26:07.785 { 00:26:07.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.785 "dma_device_type": 2 00:26:07.785 } 00:26:07.785 ], 00:26:07.785 "driver_specific": {} 00:26:07.785 } 00:26:07.785 ] 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.785 "name": "Existed_Raid", 00:26:07.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.785 "strip_size_kb": 64, 00:26:07.785 "state": "configuring", 00:26:07.785 "raid_level": "raid5f", 00:26:07.785 "superblock": false, 00:26:07.785 "num_base_bdevs": 4, 00:26:07.785 "num_base_bdevs_discovered": 2, 00:26:07.785 "num_base_bdevs_operational": 4, 00:26:07.785 "base_bdevs_list": [ 00:26:07.785 { 00:26:07.785 "name": "BaseBdev1", 00:26:07.785 "uuid": 
"3de33940-e529-4788-9fa2-851fe403b741", 00:26:07.785 "is_configured": true, 00:26:07.785 "data_offset": 0, 00:26:07.785 "data_size": 65536 00:26:07.785 }, 00:26:07.785 { 00:26:07.785 "name": "BaseBdev2", 00:26:07.785 "uuid": "86b49f2c-f253-423e-b50d-c8eb1ff746e9", 00:26:07.785 "is_configured": true, 00:26:07.785 "data_offset": 0, 00:26:07.785 "data_size": 65536 00:26:07.785 }, 00:26:07.785 { 00:26:07.785 "name": "BaseBdev3", 00:26:07.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.785 "is_configured": false, 00:26:07.785 "data_offset": 0, 00:26:07.785 "data_size": 0 00:26:07.785 }, 00:26:07.785 { 00:26:07.785 "name": "BaseBdev4", 00:26:07.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.785 "is_configured": false, 00:26:07.785 "data_offset": 0, 00:26:07.785 "data_size": 0 00:26:07.785 } 00:26:07.785 ] 00:26:07.785 }' 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.785 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.046 [2024-11-08 17:13:44.626229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:08.046 BaseBdev3 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.046 [ 00:26:08.046 { 00:26:08.046 "name": "BaseBdev3", 00:26:08.046 "aliases": [ 00:26:08.046 "53838458-a9c3-412a-a772-aa68b4aac4b5" 00:26:08.046 ], 00:26:08.046 "product_name": "Malloc disk", 00:26:08.046 "block_size": 512, 00:26:08.046 "num_blocks": 65536, 00:26:08.046 "uuid": "53838458-a9c3-412a-a772-aa68b4aac4b5", 00:26:08.046 "assigned_rate_limits": { 00:26:08.046 "rw_ios_per_sec": 0, 00:26:08.046 "rw_mbytes_per_sec": 0, 00:26:08.046 "r_mbytes_per_sec": 0, 00:26:08.046 "w_mbytes_per_sec": 0 00:26:08.046 }, 00:26:08.046 "claimed": true, 00:26:08.046 "claim_type": "exclusive_write", 00:26:08.046 "zoned": false, 00:26:08.046 "supported_io_types": { 00:26:08.046 "read": true, 00:26:08.046 "write": true, 00:26:08.046 "unmap": true, 00:26:08.046 "flush": true, 00:26:08.046 "reset": true, 00:26:08.046 "nvme_admin": false, 
00:26:08.046 "nvme_io": false, 00:26:08.046 "nvme_io_md": false, 00:26:08.046 "write_zeroes": true, 00:26:08.046 "zcopy": true, 00:26:08.046 "get_zone_info": false, 00:26:08.046 "zone_management": false, 00:26:08.046 "zone_append": false, 00:26:08.046 "compare": false, 00:26:08.046 "compare_and_write": false, 00:26:08.046 "abort": true, 00:26:08.046 "seek_hole": false, 00:26:08.046 "seek_data": false, 00:26:08.046 "copy": true, 00:26:08.046 "nvme_iov_md": false 00:26:08.046 }, 00:26:08.046 "memory_domains": [ 00:26:08.046 { 00:26:08.046 "dma_device_id": "system", 00:26:08.046 "dma_device_type": 1 00:26:08.046 }, 00:26:08.046 { 00:26:08.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.046 "dma_device_type": 2 00:26:08.046 } 00:26:08.046 ], 00:26:08.046 "driver_specific": {} 00:26:08.046 } 00:26:08.046 ] 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.046 "name": "Existed_Raid", 00:26:08.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.046 "strip_size_kb": 64, 00:26:08.046 "state": "configuring", 00:26:08.046 "raid_level": "raid5f", 00:26:08.046 "superblock": false, 00:26:08.046 "num_base_bdevs": 4, 00:26:08.046 "num_base_bdevs_discovered": 3, 00:26:08.046 "num_base_bdevs_operational": 4, 00:26:08.046 "base_bdevs_list": [ 00:26:08.046 { 00:26:08.046 "name": "BaseBdev1", 00:26:08.046 "uuid": "3de33940-e529-4788-9fa2-851fe403b741", 00:26:08.046 "is_configured": true, 00:26:08.046 "data_offset": 0, 00:26:08.046 "data_size": 65536 00:26:08.046 }, 00:26:08.046 { 00:26:08.046 "name": "BaseBdev2", 00:26:08.046 "uuid": "86b49f2c-f253-423e-b50d-c8eb1ff746e9", 00:26:08.046 "is_configured": true, 00:26:08.046 "data_offset": 0, 00:26:08.046 "data_size": 65536 00:26:08.046 }, 00:26:08.046 { 
00:26:08.046 "name": "BaseBdev3", 00:26:08.046 "uuid": "53838458-a9c3-412a-a772-aa68b4aac4b5", 00:26:08.046 "is_configured": true, 00:26:08.046 "data_offset": 0, 00:26:08.046 "data_size": 65536 00:26:08.046 }, 00:26:08.046 { 00:26:08.046 "name": "BaseBdev4", 00:26:08.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.046 "is_configured": false, 00:26:08.046 "data_offset": 0, 00:26:08.046 "data_size": 0 00:26:08.046 } 00:26:08.046 ] 00:26:08.046 }' 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.046 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.304 [2024-11-08 17:13:44.991493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:08.304 [2024-11-08 17:13:44.991742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:08.304 [2024-11-08 17:13:44.991778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:08.304 [2024-11-08 17:13:44.992076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:08.304 [2024-11-08 17:13:44.997113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:08.304 [2024-11-08 17:13:44.997224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:08.304 [2024-11-08 17:13:44.997517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.304 BaseBdev4 00:26:08.304 17:13:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.304 17:13:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.304 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.304 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:08.304 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.304 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.562 [ 00:26:08.562 { 00:26:08.562 "name": "BaseBdev4", 00:26:08.562 "aliases": [ 00:26:08.562 "fb632faa-af4b-4e35-b402-8ce720cae692" 00:26:08.562 ], 00:26:08.562 "product_name": "Malloc disk", 00:26:08.562 "block_size": 512, 00:26:08.562 "num_blocks": 65536, 00:26:08.562 "uuid": "fb632faa-af4b-4e35-b402-8ce720cae692", 00:26:08.562 "assigned_rate_limits": { 00:26:08.562 "rw_ios_per_sec": 0, 00:26:08.562 
"rw_mbytes_per_sec": 0, 00:26:08.562 "r_mbytes_per_sec": 0, 00:26:08.562 "w_mbytes_per_sec": 0 00:26:08.562 }, 00:26:08.562 "claimed": true, 00:26:08.562 "claim_type": "exclusive_write", 00:26:08.562 "zoned": false, 00:26:08.562 "supported_io_types": { 00:26:08.562 "read": true, 00:26:08.562 "write": true, 00:26:08.562 "unmap": true, 00:26:08.562 "flush": true, 00:26:08.562 "reset": true, 00:26:08.562 "nvme_admin": false, 00:26:08.562 "nvme_io": false, 00:26:08.562 "nvme_io_md": false, 00:26:08.562 "write_zeroes": true, 00:26:08.562 "zcopy": true, 00:26:08.562 "get_zone_info": false, 00:26:08.562 "zone_management": false, 00:26:08.562 "zone_append": false, 00:26:08.562 "compare": false, 00:26:08.562 "compare_and_write": false, 00:26:08.562 "abort": true, 00:26:08.562 "seek_hole": false, 00:26:08.562 "seek_data": false, 00:26:08.562 "copy": true, 00:26:08.562 "nvme_iov_md": false 00:26:08.562 }, 00:26:08.562 "memory_domains": [ 00:26:08.562 { 00:26:08.562 "dma_device_id": "system", 00:26:08.562 "dma_device_type": 1 00:26:08.562 }, 00:26:08.562 { 00:26:08.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.562 "dma_device_type": 2 00:26:08.562 } 00:26:08.562 ], 00:26:08.562 "driver_specific": {} 00:26:08.562 } 00:26:08.562 ] 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:08.562 17:13:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.562 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.562 "name": "Existed_Raid", 00:26:08.562 "uuid": "b2c775c9-6d16-4c03-a9cf-eb5465e951b0", 00:26:08.562 "strip_size_kb": 64, 00:26:08.562 "state": "online", 00:26:08.562 "raid_level": "raid5f", 00:26:08.562 "superblock": false, 00:26:08.562 "num_base_bdevs": 4, 00:26:08.562 "num_base_bdevs_discovered": 4, 00:26:08.562 "num_base_bdevs_operational": 4, 00:26:08.562 "base_bdevs_list": [ 00:26:08.562 { 00:26:08.562 "name": 
"BaseBdev1", 00:26:08.563 "uuid": "3de33940-e529-4788-9fa2-851fe403b741", 00:26:08.563 "is_configured": true, 00:26:08.563 "data_offset": 0, 00:26:08.563 "data_size": 65536 00:26:08.563 }, 00:26:08.563 { 00:26:08.563 "name": "BaseBdev2", 00:26:08.563 "uuid": "86b49f2c-f253-423e-b50d-c8eb1ff746e9", 00:26:08.563 "is_configured": true, 00:26:08.563 "data_offset": 0, 00:26:08.563 "data_size": 65536 00:26:08.563 }, 00:26:08.563 { 00:26:08.563 "name": "BaseBdev3", 00:26:08.563 "uuid": "53838458-a9c3-412a-a772-aa68b4aac4b5", 00:26:08.563 "is_configured": true, 00:26:08.563 "data_offset": 0, 00:26:08.563 "data_size": 65536 00:26:08.563 }, 00:26:08.563 { 00:26:08.563 "name": "BaseBdev4", 00:26:08.563 "uuid": "fb632faa-af4b-4e35-b402-8ce720cae692", 00:26:08.563 "is_configured": true, 00:26:08.563 "data_offset": 0, 00:26:08.563 "data_size": 65536 00:26:08.563 } 00:26:08.563 ] 00:26:08.563 }' 00:26:08.563 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.563 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.821 [2024-11-08 17:13:45.359384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.821 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:08.821 "name": "Existed_Raid", 00:26:08.821 "aliases": [ 00:26:08.821 "b2c775c9-6d16-4c03-a9cf-eb5465e951b0" 00:26:08.821 ], 00:26:08.821 "product_name": "Raid Volume", 00:26:08.821 "block_size": 512, 00:26:08.821 "num_blocks": 196608, 00:26:08.821 "uuid": "b2c775c9-6d16-4c03-a9cf-eb5465e951b0", 00:26:08.821 "assigned_rate_limits": { 00:26:08.821 "rw_ios_per_sec": 0, 00:26:08.821 "rw_mbytes_per_sec": 0, 00:26:08.821 "r_mbytes_per_sec": 0, 00:26:08.821 "w_mbytes_per_sec": 0 00:26:08.821 }, 00:26:08.821 "claimed": false, 00:26:08.821 "zoned": false, 00:26:08.821 "supported_io_types": { 00:26:08.821 "read": true, 00:26:08.821 "write": true, 00:26:08.821 "unmap": false, 00:26:08.821 "flush": false, 00:26:08.821 "reset": true, 00:26:08.821 "nvme_admin": false, 00:26:08.821 "nvme_io": false, 00:26:08.821 "nvme_io_md": false, 00:26:08.821 "write_zeroes": true, 00:26:08.821 "zcopy": false, 00:26:08.821 "get_zone_info": false, 00:26:08.821 "zone_management": false, 00:26:08.821 "zone_append": false, 00:26:08.821 "compare": false, 00:26:08.821 "compare_and_write": false, 00:26:08.821 "abort": false, 00:26:08.821 "seek_hole": false, 00:26:08.821 "seek_data": false, 00:26:08.821 "copy": false, 00:26:08.821 "nvme_iov_md": false 00:26:08.821 }, 00:26:08.821 "driver_specific": { 00:26:08.821 "raid": { 00:26:08.821 "uuid": "b2c775c9-6d16-4c03-a9cf-eb5465e951b0", 00:26:08.821 "strip_size_kb": 64, 
00:26:08.821 "state": "online", 00:26:08.821 "raid_level": "raid5f", 00:26:08.821 "superblock": false, 00:26:08.821 "num_base_bdevs": 4, 00:26:08.821 "num_base_bdevs_discovered": 4, 00:26:08.821 "num_base_bdevs_operational": 4, 00:26:08.821 "base_bdevs_list": [ 00:26:08.821 { 00:26:08.821 "name": "BaseBdev1", 00:26:08.821 "uuid": "3de33940-e529-4788-9fa2-851fe403b741", 00:26:08.821 "is_configured": true, 00:26:08.821 "data_offset": 0, 00:26:08.821 "data_size": 65536 00:26:08.821 }, 00:26:08.821 { 00:26:08.821 "name": "BaseBdev2", 00:26:08.821 "uuid": "86b49f2c-f253-423e-b50d-c8eb1ff746e9", 00:26:08.821 "is_configured": true, 00:26:08.821 "data_offset": 0, 00:26:08.821 "data_size": 65536 00:26:08.821 }, 00:26:08.821 { 00:26:08.821 "name": "BaseBdev3", 00:26:08.821 "uuid": "53838458-a9c3-412a-a772-aa68b4aac4b5", 00:26:08.821 "is_configured": true, 00:26:08.821 "data_offset": 0, 00:26:08.821 "data_size": 65536 00:26:08.821 }, 00:26:08.821 { 00:26:08.821 "name": "BaseBdev4", 00:26:08.821 "uuid": "fb632faa-af4b-4e35-b402-8ce720cae692", 00:26:08.821 "is_configured": true, 00:26:08.821 "data_offset": 0, 00:26:08.821 "data_size": 65536 00:26:08.821 } 00:26:08.821 ] 00:26:08.821 } 00:26:08.822 } 00:26:08.822 }' 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:08.822 BaseBdev2 00:26:08.822 BaseBdev3 00:26:08.822 BaseBdev4' 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:08.822 17:13:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.822 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.080 [2024-11-08 17:13:45.579261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.080 17:13:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.080 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.080 "name": "Existed_Raid", 00:26:09.080 "uuid": "b2c775c9-6d16-4c03-a9cf-eb5465e951b0", 00:26:09.080 "strip_size_kb": 64, 00:26:09.080 "state": "online", 00:26:09.080 "raid_level": "raid5f", 00:26:09.080 "superblock": false, 00:26:09.080 "num_base_bdevs": 4, 00:26:09.080 "num_base_bdevs_discovered": 3, 00:26:09.080 "num_base_bdevs_operational": 3, 00:26:09.080 "base_bdevs_list": [ 00:26:09.080 { 00:26:09.080 "name": null, 00:26:09.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.080 "is_configured": false, 00:26:09.080 "data_offset": 0, 00:26:09.080 "data_size": 65536 00:26:09.080 }, 00:26:09.080 { 00:26:09.080 "name": "BaseBdev2", 00:26:09.080 "uuid": "86b49f2c-f253-423e-b50d-c8eb1ff746e9", 00:26:09.080 "is_configured": true, 00:26:09.081 "data_offset": 0, 00:26:09.081 "data_size": 65536 00:26:09.081 }, 00:26:09.081 { 00:26:09.081 "name": "BaseBdev3", 00:26:09.081 "uuid": "53838458-a9c3-412a-a772-aa68b4aac4b5", 00:26:09.081 "is_configured": true, 00:26:09.081 "data_offset": 0, 00:26:09.081 "data_size": 65536 00:26:09.081 }, 00:26:09.081 { 00:26:09.081 "name": "BaseBdev4", 00:26:09.081 "uuid": "fb632faa-af4b-4e35-b402-8ce720cae692", 00:26:09.081 "is_configured": true, 00:26:09.081 "data_offset": 0, 00:26:09.081 "data_size": 65536 00:26:09.081 } 00:26:09.081 ] 00:26:09.081 }' 00:26:09.081 
17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.081 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.338 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:09.338 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:09.338 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.338 17:13:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:09.338 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.338 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.338 17:13:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.338 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:09.338 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:09.338 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:09.338 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.338 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.338 [2024-11-08 17:13:46.026717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:09.338 [2024-11-08 17:13:46.026834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:09.597 [2024-11-08 17:13:46.088327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.597 [2024-11-08 17:13:46.128399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.597 [2024-11-08 17:13:46.231961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:09.597 [2024-11-08 17:13:46.232014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.597 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.855 BaseBdev2 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.855 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.855 [ 00:26:09.855 { 00:26:09.855 "name": "BaseBdev2", 00:26:09.855 "aliases": [ 00:26:09.855 "8663c69f-7c6c-4785-89f8-f2162ae9f210" 00:26:09.855 ], 00:26:09.855 "product_name": "Malloc disk", 00:26:09.855 "block_size": 512, 00:26:09.855 "num_blocks": 65536, 00:26:09.855 "uuid": "8663c69f-7c6c-4785-89f8-f2162ae9f210", 00:26:09.855 "assigned_rate_limits": { 00:26:09.855 "rw_ios_per_sec": 0, 00:26:09.855 "rw_mbytes_per_sec": 0, 00:26:09.855 "r_mbytes_per_sec": 0, 00:26:09.855 "w_mbytes_per_sec": 0 00:26:09.855 }, 00:26:09.856 "claimed": false, 00:26:09.856 "zoned": false, 00:26:09.856 "supported_io_types": { 00:26:09.856 "read": true, 00:26:09.856 "write": true, 00:26:09.856 "unmap": true, 00:26:09.856 "flush": true, 00:26:09.856 "reset": true, 00:26:09.856 "nvme_admin": false, 00:26:09.856 "nvme_io": false, 00:26:09.856 "nvme_io_md": false, 00:26:09.856 "write_zeroes": true, 00:26:09.856 "zcopy": true, 00:26:09.856 "get_zone_info": false, 00:26:09.856 "zone_management": false, 00:26:09.856 "zone_append": false, 00:26:09.856 "compare": false, 00:26:09.856 "compare_and_write": false, 00:26:09.856 "abort": true, 00:26:09.856 "seek_hole": false, 00:26:09.856 "seek_data": false, 00:26:09.856 "copy": true, 00:26:09.856 "nvme_iov_md": false 00:26:09.856 }, 00:26:09.856 "memory_domains": [ 00:26:09.856 { 00:26:09.856 "dma_device_id": "system", 00:26:09.856 
"dma_device_type": 1 00:26:09.856 }, 00:26:09.856 { 00:26:09.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.856 "dma_device_type": 2 00:26:09.856 } 00:26:09.856 ], 00:26:09.856 "driver_specific": {} 00:26:09.856 } 00:26:09.856 ] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.856 BaseBdev3 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:09.856 17:13:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.856 [ 00:26:09.856 { 00:26:09.856 "name": "BaseBdev3", 00:26:09.856 "aliases": [ 00:26:09.856 "aef802c3-e7f0-4ed9-9dc6-0230d3f08872" 00:26:09.856 ], 00:26:09.856 "product_name": "Malloc disk", 00:26:09.856 "block_size": 512, 00:26:09.856 "num_blocks": 65536, 00:26:09.856 "uuid": "aef802c3-e7f0-4ed9-9dc6-0230d3f08872", 00:26:09.856 "assigned_rate_limits": { 00:26:09.856 "rw_ios_per_sec": 0, 00:26:09.856 "rw_mbytes_per_sec": 0, 00:26:09.856 "r_mbytes_per_sec": 0, 00:26:09.856 "w_mbytes_per_sec": 0 00:26:09.856 }, 00:26:09.856 "claimed": false, 00:26:09.856 "zoned": false, 00:26:09.856 "supported_io_types": { 00:26:09.856 "read": true, 00:26:09.856 "write": true, 00:26:09.856 "unmap": true, 00:26:09.856 "flush": true, 00:26:09.856 "reset": true, 00:26:09.856 "nvme_admin": false, 00:26:09.856 "nvme_io": false, 00:26:09.856 "nvme_io_md": false, 00:26:09.856 "write_zeroes": true, 00:26:09.856 "zcopy": true, 00:26:09.856 "get_zone_info": false, 00:26:09.856 "zone_management": false, 00:26:09.856 "zone_append": false, 00:26:09.856 "compare": false, 00:26:09.856 "compare_and_write": false, 00:26:09.856 "abort": true, 00:26:09.856 "seek_hole": false, 00:26:09.856 "seek_data": false, 00:26:09.856 "copy": true, 00:26:09.856 "nvme_iov_md": false 00:26:09.856 }, 00:26:09.856 "memory_domains": [ 00:26:09.856 { 00:26:09.856 
"dma_device_id": "system", 00:26:09.856 "dma_device_type": 1 00:26:09.856 }, 00:26:09.856 { 00:26:09.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.856 "dma_device_type": 2 00:26:09.856 } 00:26:09.856 ], 00:26:09.856 "driver_specific": {} 00:26:09.856 } 00:26:09.856 ] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.856 BaseBdev4 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 
00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.856 [ 00:26:09.856 { 00:26:09.856 "name": "BaseBdev4", 00:26:09.856 "aliases": [ 00:26:09.856 "7f2ca08a-5e37-4735-b6db-966c587fa845" 00:26:09.856 ], 00:26:09.856 "product_name": "Malloc disk", 00:26:09.856 "block_size": 512, 00:26:09.856 "num_blocks": 65536, 00:26:09.856 "uuid": "7f2ca08a-5e37-4735-b6db-966c587fa845", 00:26:09.856 "assigned_rate_limits": { 00:26:09.856 "rw_ios_per_sec": 0, 00:26:09.856 "rw_mbytes_per_sec": 0, 00:26:09.856 "r_mbytes_per_sec": 0, 00:26:09.856 "w_mbytes_per_sec": 0 00:26:09.856 }, 00:26:09.856 "claimed": false, 00:26:09.856 "zoned": false, 00:26:09.856 "supported_io_types": { 00:26:09.856 "read": true, 00:26:09.856 "write": true, 00:26:09.856 "unmap": true, 00:26:09.856 "flush": true, 00:26:09.856 "reset": true, 00:26:09.856 "nvme_admin": false, 00:26:09.856 "nvme_io": false, 00:26:09.856 "nvme_io_md": false, 00:26:09.856 "write_zeroes": true, 00:26:09.856 "zcopy": true, 00:26:09.856 "get_zone_info": false, 00:26:09.856 "zone_management": false, 00:26:09.856 "zone_append": false, 00:26:09.856 "compare": false, 00:26:09.856 "compare_and_write": false, 00:26:09.856 "abort": true, 00:26:09.856 "seek_hole": false, 00:26:09.856 "seek_data": false, 00:26:09.856 "copy": true, 00:26:09.856 "nvme_iov_md": false 00:26:09.856 }, 00:26:09.856 "memory_domains": [ 
00:26:09.856 { 00:26:09.856 "dma_device_id": "system", 00:26:09.856 "dma_device_type": 1 00:26:09.856 }, 00:26:09.856 { 00:26:09.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.856 "dma_device_type": 2 00:26:09.856 } 00:26:09.856 ], 00:26:09.856 "driver_specific": {} 00:26:09.856 } 00:26:09.856 ] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.856 [2024-11-08 17:13:46.520142] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:09.856 [2024-11-08 17:13:46.520327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:09.856 [2024-11-08 17:13:46.520447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:09.856 [2024-11-08 17:13:46.523227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:09.856 [2024-11-08 17:13:46.523421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:09.856 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.857 "name": "Existed_Raid", 00:26:09.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.857 "strip_size_kb": 64, 00:26:09.857 "state": "configuring", 00:26:09.857 "raid_level": "raid5f", 00:26:09.857 
"superblock": false, 00:26:09.857 "num_base_bdevs": 4, 00:26:09.857 "num_base_bdevs_discovered": 3, 00:26:09.857 "num_base_bdevs_operational": 4, 00:26:09.857 "base_bdevs_list": [ 00:26:09.857 { 00:26:09.857 "name": "BaseBdev1", 00:26:09.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.857 "is_configured": false, 00:26:09.857 "data_offset": 0, 00:26:09.857 "data_size": 0 00:26:09.857 }, 00:26:09.857 { 00:26:09.857 "name": "BaseBdev2", 00:26:09.857 "uuid": "8663c69f-7c6c-4785-89f8-f2162ae9f210", 00:26:09.857 "is_configured": true, 00:26:09.857 "data_offset": 0, 00:26:09.857 "data_size": 65536 00:26:09.857 }, 00:26:09.857 { 00:26:09.857 "name": "BaseBdev3", 00:26:09.857 "uuid": "aef802c3-e7f0-4ed9-9dc6-0230d3f08872", 00:26:09.857 "is_configured": true, 00:26:09.857 "data_offset": 0, 00:26:09.857 "data_size": 65536 00:26:09.857 }, 00:26:09.857 { 00:26:09.857 "name": "BaseBdev4", 00:26:09.857 "uuid": "7f2ca08a-5e37-4735-b6db-966c587fa845", 00:26:09.857 "is_configured": true, 00:26:09.857 "data_offset": 0, 00:26:09.857 "data_size": 65536 00:26:09.857 } 00:26:09.857 ] 00:26:09.857 }' 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.857 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.198 [2024-11-08 17:13:46.884195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.198 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.456 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:10.456 "name": "Existed_Raid", 00:26:10.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.456 "strip_size_kb": 64, 00:26:10.456 "state": "configuring", 00:26:10.456 "raid_level": "raid5f", 00:26:10.456 "superblock": false, 
00:26:10.456 "num_base_bdevs": 4, 00:26:10.456 "num_base_bdevs_discovered": 2, 00:26:10.456 "num_base_bdevs_operational": 4, 00:26:10.456 "base_bdevs_list": [ 00:26:10.456 { 00:26:10.456 "name": "BaseBdev1", 00:26:10.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.456 "is_configured": false, 00:26:10.456 "data_offset": 0, 00:26:10.456 "data_size": 0 00:26:10.456 }, 00:26:10.456 { 00:26:10.456 "name": null, 00:26:10.456 "uuid": "8663c69f-7c6c-4785-89f8-f2162ae9f210", 00:26:10.456 "is_configured": false, 00:26:10.456 "data_offset": 0, 00:26:10.456 "data_size": 65536 00:26:10.456 }, 00:26:10.456 { 00:26:10.456 "name": "BaseBdev3", 00:26:10.456 "uuid": "aef802c3-e7f0-4ed9-9dc6-0230d3f08872", 00:26:10.456 "is_configured": true, 00:26:10.456 "data_offset": 0, 00:26:10.456 "data_size": 65536 00:26:10.456 }, 00:26:10.456 { 00:26:10.456 "name": "BaseBdev4", 00:26:10.456 "uuid": "7f2ca08a-5e37-4735-b6db-966c587fa845", 00:26:10.456 "is_configured": true, 00:26:10.456 "data_offset": 0, 00:26:10.456 "data_size": 65536 00:26:10.456 } 00:26:10.456 ] 00:26:10.456 }' 00:26:10.456 17:13:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:10.456 17:13:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.714 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:10.714 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.714 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.714 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.714 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.714 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:10.714 
17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:10.714 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.714 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.714 [2024-11-08 17:13:47.277191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:10.714 BaseBdev1 00:26:10.714 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.715 
17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.715 [ 00:26:10.715 { 00:26:10.715 "name": "BaseBdev1", 00:26:10.715 "aliases": [ 00:26:10.715 "b166a958-0179-4141-a844-cd7431125df0" 00:26:10.715 ], 00:26:10.715 "product_name": "Malloc disk", 00:26:10.715 "block_size": 512, 00:26:10.715 "num_blocks": 65536, 00:26:10.715 "uuid": "b166a958-0179-4141-a844-cd7431125df0", 00:26:10.715 "assigned_rate_limits": { 00:26:10.715 "rw_ios_per_sec": 0, 00:26:10.715 "rw_mbytes_per_sec": 0, 00:26:10.715 "r_mbytes_per_sec": 0, 00:26:10.715 "w_mbytes_per_sec": 0 00:26:10.715 }, 00:26:10.715 "claimed": true, 00:26:10.715 "claim_type": "exclusive_write", 00:26:10.715 "zoned": false, 00:26:10.715 "supported_io_types": { 00:26:10.715 "read": true, 00:26:10.715 "write": true, 00:26:10.715 "unmap": true, 00:26:10.715 "flush": true, 00:26:10.715 "reset": true, 00:26:10.715 "nvme_admin": false, 00:26:10.715 "nvme_io": false, 00:26:10.715 "nvme_io_md": false, 00:26:10.715 "write_zeroes": true, 00:26:10.715 "zcopy": true, 00:26:10.715 "get_zone_info": false, 00:26:10.715 "zone_management": false, 00:26:10.715 "zone_append": false, 00:26:10.715 "compare": false, 00:26:10.715 "compare_and_write": false, 00:26:10.715 "abort": true, 00:26:10.715 "seek_hole": false, 00:26:10.715 "seek_data": false, 00:26:10.715 "copy": true, 00:26:10.715 "nvme_iov_md": false 00:26:10.715 }, 00:26:10.715 "memory_domains": [ 00:26:10.715 { 00:26:10.715 "dma_device_id": "system", 00:26:10.715 "dma_device_type": 1 00:26:10.715 }, 00:26:10.715 { 00:26:10.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.715 "dma_device_type": 2 00:26:10.715 } 00:26:10.715 ], 00:26:10.715 "driver_specific": {} 00:26:10.715 } 00:26:10.715 ] 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:10.715 17:13:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:10.715 "name": "Existed_Raid", 00:26:10.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.715 "strip_size_kb": 64, 00:26:10.715 "state": 
"configuring", 00:26:10.715 "raid_level": "raid5f", 00:26:10.715 "superblock": false, 00:26:10.715 "num_base_bdevs": 4, 00:26:10.715 "num_base_bdevs_discovered": 3, 00:26:10.715 "num_base_bdevs_operational": 4, 00:26:10.715 "base_bdevs_list": [ 00:26:10.715 { 00:26:10.715 "name": "BaseBdev1", 00:26:10.715 "uuid": "b166a958-0179-4141-a844-cd7431125df0", 00:26:10.715 "is_configured": true, 00:26:10.715 "data_offset": 0, 00:26:10.715 "data_size": 65536 00:26:10.715 }, 00:26:10.715 { 00:26:10.715 "name": null, 00:26:10.715 "uuid": "8663c69f-7c6c-4785-89f8-f2162ae9f210", 00:26:10.715 "is_configured": false, 00:26:10.715 "data_offset": 0, 00:26:10.715 "data_size": 65536 00:26:10.715 }, 00:26:10.715 { 00:26:10.715 "name": "BaseBdev3", 00:26:10.715 "uuid": "aef802c3-e7f0-4ed9-9dc6-0230d3f08872", 00:26:10.715 "is_configured": true, 00:26:10.715 "data_offset": 0, 00:26:10.715 "data_size": 65536 00:26:10.715 }, 00:26:10.715 { 00:26:10.715 "name": "BaseBdev4", 00:26:10.715 "uuid": "7f2ca08a-5e37-4735-b6db-966c587fa845", 00:26:10.715 "is_configured": true, 00:26:10.715 "data_offset": 0, 00:26:10.715 "data_size": 65536 00:26:10.715 } 00:26:10.715 ] 00:26:10.715 }' 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:10.715 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.974 17:13:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.974 [2024-11-08 17:13:47.677370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:10.974 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.232 17:13:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.232 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.232 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 17:13:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:11.232 "name": "Existed_Raid", 00:26:11.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.232 "strip_size_kb": 64, 00:26:11.232 "state": "configuring", 00:26:11.232 "raid_level": "raid5f", 00:26:11.232 "superblock": false, 00:26:11.232 "num_base_bdevs": 4, 00:26:11.233 "num_base_bdevs_discovered": 2, 00:26:11.233 "num_base_bdevs_operational": 4, 00:26:11.233 "base_bdevs_list": [ 00:26:11.233 { 00:26:11.233 "name": "BaseBdev1", 00:26:11.233 "uuid": "b166a958-0179-4141-a844-cd7431125df0", 00:26:11.233 "is_configured": true, 00:26:11.233 "data_offset": 0, 00:26:11.233 "data_size": 65536 00:26:11.233 }, 00:26:11.233 { 00:26:11.233 "name": null, 00:26:11.233 "uuid": "8663c69f-7c6c-4785-89f8-f2162ae9f210", 00:26:11.233 "is_configured": false, 00:26:11.233 "data_offset": 0, 00:26:11.233 "data_size": 65536 00:26:11.233 }, 00:26:11.233 { 00:26:11.233 "name": null, 00:26:11.233 "uuid": "aef802c3-e7f0-4ed9-9dc6-0230d3f08872", 00:26:11.233 "is_configured": false, 00:26:11.233 "data_offset": 0, 00:26:11.233 "data_size": 65536 00:26:11.233 }, 00:26:11.233 { 00:26:11.233 "name": "BaseBdev4", 00:26:11.233 "uuid": "7f2ca08a-5e37-4735-b6db-966c587fa845", 00:26:11.233 "is_configured": true, 00:26:11.233 "data_offset": 0, 00:26:11.233 "data_size": 65536 00:26:11.233 } 00:26:11.233 ] 00:26:11.233 }' 00:26:11.233 17:13:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:11.233 17:13:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.491 [2024-11-08 17:13:48.037456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:11.491 
17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.491 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:11.491 "name": "Existed_Raid", 00:26:11.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.491 "strip_size_kb": 64, 00:26:11.491 "state": "configuring", 00:26:11.491 "raid_level": "raid5f", 00:26:11.491 "superblock": false, 00:26:11.491 "num_base_bdevs": 4, 00:26:11.491 "num_base_bdevs_discovered": 3, 00:26:11.492 "num_base_bdevs_operational": 4, 00:26:11.492 "base_bdevs_list": [ 00:26:11.492 { 00:26:11.492 "name": "BaseBdev1", 00:26:11.492 "uuid": "b166a958-0179-4141-a844-cd7431125df0", 00:26:11.492 "is_configured": true, 00:26:11.492 "data_offset": 0, 00:26:11.492 "data_size": 65536 00:26:11.492 }, 00:26:11.492 { 00:26:11.492 "name": null, 00:26:11.492 "uuid": "8663c69f-7c6c-4785-89f8-f2162ae9f210", 00:26:11.492 "is_configured": 
false, 00:26:11.492 "data_offset": 0, 00:26:11.492 "data_size": 65536 00:26:11.492 }, 00:26:11.492 { 00:26:11.492 "name": "BaseBdev3", 00:26:11.492 "uuid": "aef802c3-e7f0-4ed9-9dc6-0230d3f08872", 00:26:11.492 "is_configured": true, 00:26:11.492 "data_offset": 0, 00:26:11.492 "data_size": 65536 00:26:11.492 }, 00:26:11.492 { 00:26:11.492 "name": "BaseBdev4", 00:26:11.492 "uuid": "7f2ca08a-5e37-4735-b6db-966c587fa845", 00:26:11.492 "is_configured": true, 00:26:11.492 "data_offset": 0, 00:26:11.492 "data_size": 65536 00:26:11.492 } 00:26:11.492 ] 00:26:11.492 }' 00:26:11.492 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:11.492 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.750 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:11.750 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.750 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.750 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.750 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.750 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:11.750 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:11.750 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.750 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.750 [2024-11-08 17:13:48.417580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:12.008 "name": "Existed_Raid", 00:26:12.008 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:12.008 "strip_size_kb": 64, 00:26:12.008 "state": "configuring", 00:26:12.008 "raid_level": "raid5f", 00:26:12.008 "superblock": false, 00:26:12.008 "num_base_bdevs": 4, 00:26:12.008 "num_base_bdevs_discovered": 2, 00:26:12.008 "num_base_bdevs_operational": 4, 00:26:12.008 "base_bdevs_list": [ 00:26:12.008 { 00:26:12.008 "name": null, 00:26:12.008 "uuid": "b166a958-0179-4141-a844-cd7431125df0", 00:26:12.008 "is_configured": false, 00:26:12.008 "data_offset": 0, 00:26:12.008 "data_size": 65536 00:26:12.008 }, 00:26:12.008 { 00:26:12.008 "name": null, 00:26:12.008 "uuid": "8663c69f-7c6c-4785-89f8-f2162ae9f210", 00:26:12.008 "is_configured": false, 00:26:12.008 "data_offset": 0, 00:26:12.008 "data_size": 65536 00:26:12.008 }, 00:26:12.008 { 00:26:12.008 "name": "BaseBdev3", 00:26:12.008 "uuid": "aef802c3-e7f0-4ed9-9dc6-0230d3f08872", 00:26:12.008 "is_configured": true, 00:26:12.008 "data_offset": 0, 00:26:12.008 "data_size": 65536 00:26:12.008 }, 00:26:12.008 { 00:26:12.008 "name": "BaseBdev4", 00:26:12.008 "uuid": "7f2ca08a-5e37-4735-b6db-966c587fa845", 00:26:12.008 "is_configured": true, 00:26:12.008 "data_offset": 0, 00:26:12.008 "data_size": 65536 00:26:12.008 } 00:26:12.008 ] 00:26:12.008 }' 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:12.008 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.266 [2024-11-08 17:13:48.840673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:12.266 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:12.267 "name": "Existed_Raid", 00:26:12.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.267 "strip_size_kb": 64, 00:26:12.267 "state": "configuring", 00:26:12.267 "raid_level": "raid5f", 00:26:12.267 "superblock": false, 00:26:12.267 "num_base_bdevs": 4, 00:26:12.267 "num_base_bdevs_discovered": 3, 00:26:12.267 "num_base_bdevs_operational": 4, 00:26:12.267 "base_bdevs_list": [ 00:26:12.267 { 00:26:12.267 "name": null, 00:26:12.267 "uuid": "b166a958-0179-4141-a844-cd7431125df0", 00:26:12.267 "is_configured": false, 00:26:12.267 "data_offset": 0, 00:26:12.267 "data_size": 65536 00:26:12.267 }, 00:26:12.267 { 00:26:12.267 "name": "BaseBdev2", 00:26:12.267 "uuid": "8663c69f-7c6c-4785-89f8-f2162ae9f210", 00:26:12.267 "is_configured": true, 00:26:12.267 "data_offset": 0, 00:26:12.267 "data_size": 65536 00:26:12.267 }, 00:26:12.267 { 00:26:12.267 "name": "BaseBdev3", 00:26:12.267 "uuid": "aef802c3-e7f0-4ed9-9dc6-0230d3f08872", 00:26:12.267 "is_configured": true, 00:26:12.267 "data_offset": 0, 00:26:12.267 "data_size": 65536 00:26:12.267 }, 00:26:12.267 { 00:26:12.267 "name": "BaseBdev4", 00:26:12.267 "uuid": "7f2ca08a-5e37-4735-b6db-966c587fa845", 00:26:12.267 "is_configured": true, 00:26:12.267 "data_offset": 0, 00:26:12.267 "data_size": 65536 00:26:12.267 } 00:26:12.267 ] 00:26:12.267 }' 00:26:12.267 17:13:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:12.267 17:13:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b166a958-0179-4141-a844-cd7431125df0 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.525 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.783 [2024-11-08 17:13:49.265383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:12.783 [2024-11-08 
17:13:49.265443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:12.783 [2024-11-08 17:13:49.265451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:12.783 [2024-11-08 17:13:49.265719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:12.783 [2024-11-08 17:13:49.270648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:12.783 [2024-11-08 17:13:49.270672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:12.783 [2024-11-08 17:13:49.270945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:12.783 NewBaseBdev 00:26:12.783 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.784 [ 00:26:12.784 { 00:26:12.784 "name": "NewBaseBdev", 00:26:12.784 "aliases": [ 00:26:12.784 "b166a958-0179-4141-a844-cd7431125df0" 00:26:12.784 ], 00:26:12.784 "product_name": "Malloc disk", 00:26:12.784 "block_size": 512, 00:26:12.784 "num_blocks": 65536, 00:26:12.784 "uuid": "b166a958-0179-4141-a844-cd7431125df0", 00:26:12.784 "assigned_rate_limits": { 00:26:12.784 "rw_ios_per_sec": 0, 00:26:12.784 "rw_mbytes_per_sec": 0, 00:26:12.784 "r_mbytes_per_sec": 0, 00:26:12.784 "w_mbytes_per_sec": 0 00:26:12.784 }, 00:26:12.784 "claimed": true, 00:26:12.784 "claim_type": "exclusive_write", 00:26:12.784 "zoned": false, 00:26:12.784 "supported_io_types": { 00:26:12.784 "read": true, 00:26:12.784 "write": true, 00:26:12.784 "unmap": true, 00:26:12.784 "flush": true, 00:26:12.784 "reset": true, 00:26:12.784 "nvme_admin": false, 00:26:12.784 "nvme_io": false, 00:26:12.784 "nvme_io_md": false, 00:26:12.784 "write_zeroes": true, 00:26:12.784 "zcopy": true, 00:26:12.784 "get_zone_info": false, 00:26:12.784 "zone_management": false, 00:26:12.784 "zone_append": false, 00:26:12.784 "compare": false, 00:26:12.784 "compare_and_write": false, 00:26:12.784 "abort": true, 00:26:12.784 "seek_hole": false, 00:26:12.784 "seek_data": false, 00:26:12.784 "copy": true, 00:26:12.784 "nvme_iov_md": false 00:26:12.784 }, 00:26:12.784 "memory_domains": [ 00:26:12.784 { 00:26:12.784 "dma_device_id": "system", 00:26:12.784 "dma_device_type": 1 00:26:12.784 }, 00:26:12.784 { 00:26:12.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.784 "dma_device_type": 2 00:26:12.784 } 
00:26:12.784 ], 00:26:12.784 "driver_specific": {} 00:26:12.784 } 00:26:12.784 ] 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:12.784 "name": "Existed_Raid", 00:26:12.784 "uuid": "17e0ad85-46f7-4896-a8e9-a337df714307", 00:26:12.784 "strip_size_kb": 64, 00:26:12.784 "state": "online", 00:26:12.784 "raid_level": "raid5f", 00:26:12.784 "superblock": false, 00:26:12.784 "num_base_bdevs": 4, 00:26:12.784 "num_base_bdevs_discovered": 4, 00:26:12.784 "num_base_bdevs_operational": 4, 00:26:12.784 "base_bdevs_list": [ 00:26:12.784 { 00:26:12.784 "name": "NewBaseBdev", 00:26:12.784 "uuid": "b166a958-0179-4141-a844-cd7431125df0", 00:26:12.784 "is_configured": true, 00:26:12.784 "data_offset": 0, 00:26:12.784 "data_size": 65536 00:26:12.784 }, 00:26:12.784 { 00:26:12.784 "name": "BaseBdev2", 00:26:12.784 "uuid": "8663c69f-7c6c-4785-89f8-f2162ae9f210", 00:26:12.784 "is_configured": true, 00:26:12.784 "data_offset": 0, 00:26:12.784 "data_size": 65536 00:26:12.784 }, 00:26:12.784 { 00:26:12.784 "name": "BaseBdev3", 00:26:12.784 "uuid": "aef802c3-e7f0-4ed9-9dc6-0230d3f08872", 00:26:12.784 "is_configured": true, 00:26:12.784 "data_offset": 0, 00:26:12.784 "data_size": 65536 00:26:12.784 }, 00:26:12.784 { 00:26:12.784 "name": "BaseBdev4", 00:26:12.784 "uuid": "7f2ca08a-5e37-4735-b6db-966c587fa845", 00:26:12.784 "is_configured": true, 00:26:12.784 "data_offset": 0, 00:26:12.784 "data_size": 65536 00:26:12.784 } 00:26:12.784 ] 00:26:12.784 }' 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:12.784 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.042 [2024-11-08 17:13:49.624807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:13.042 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.043 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:13.043 "name": "Existed_Raid", 00:26:13.043 "aliases": [ 00:26:13.043 "17e0ad85-46f7-4896-a8e9-a337df714307" 00:26:13.043 ], 00:26:13.043 "product_name": "Raid Volume", 00:26:13.043 "block_size": 512, 00:26:13.043 "num_blocks": 196608, 00:26:13.043 "uuid": "17e0ad85-46f7-4896-a8e9-a337df714307", 00:26:13.043 "assigned_rate_limits": { 00:26:13.043 "rw_ios_per_sec": 0, 00:26:13.043 "rw_mbytes_per_sec": 0, 00:26:13.043 "r_mbytes_per_sec": 0, 00:26:13.043 "w_mbytes_per_sec": 0 00:26:13.043 }, 00:26:13.043 "claimed": false, 00:26:13.043 "zoned": false, 00:26:13.043 "supported_io_types": { 00:26:13.043 "read": true, 00:26:13.043 "write": true, 00:26:13.043 "unmap": false, 00:26:13.043 "flush": false, 00:26:13.043 "reset": true, 00:26:13.043 "nvme_admin": false, 00:26:13.043 "nvme_io": false, 00:26:13.043 "nvme_io_md": 
false, 00:26:13.043 "write_zeroes": true, 00:26:13.043 "zcopy": false, 00:26:13.043 "get_zone_info": false, 00:26:13.043 "zone_management": false, 00:26:13.043 "zone_append": false, 00:26:13.043 "compare": false, 00:26:13.043 "compare_and_write": false, 00:26:13.043 "abort": false, 00:26:13.043 "seek_hole": false, 00:26:13.043 "seek_data": false, 00:26:13.043 "copy": false, 00:26:13.043 "nvme_iov_md": false 00:26:13.043 }, 00:26:13.043 "driver_specific": { 00:26:13.043 "raid": { 00:26:13.043 "uuid": "17e0ad85-46f7-4896-a8e9-a337df714307", 00:26:13.043 "strip_size_kb": 64, 00:26:13.043 "state": "online", 00:26:13.043 "raid_level": "raid5f", 00:26:13.043 "superblock": false, 00:26:13.043 "num_base_bdevs": 4, 00:26:13.043 "num_base_bdevs_discovered": 4, 00:26:13.043 "num_base_bdevs_operational": 4, 00:26:13.043 "base_bdevs_list": [ 00:26:13.043 { 00:26:13.043 "name": "NewBaseBdev", 00:26:13.043 "uuid": "b166a958-0179-4141-a844-cd7431125df0", 00:26:13.043 "is_configured": true, 00:26:13.043 "data_offset": 0, 00:26:13.043 "data_size": 65536 00:26:13.043 }, 00:26:13.043 { 00:26:13.043 "name": "BaseBdev2", 00:26:13.043 "uuid": "8663c69f-7c6c-4785-89f8-f2162ae9f210", 00:26:13.043 "is_configured": true, 00:26:13.043 "data_offset": 0, 00:26:13.043 "data_size": 65536 00:26:13.043 }, 00:26:13.043 { 00:26:13.043 "name": "BaseBdev3", 00:26:13.043 "uuid": "aef802c3-e7f0-4ed9-9dc6-0230d3f08872", 00:26:13.043 "is_configured": true, 00:26:13.043 "data_offset": 0, 00:26:13.043 "data_size": 65536 00:26:13.043 }, 00:26:13.043 { 00:26:13.043 "name": "BaseBdev4", 00:26:13.043 "uuid": "7f2ca08a-5e37-4735-b6db-966c587fa845", 00:26:13.043 "is_configured": true, 00:26:13.043 "data_offset": 0, 00:26:13.043 "data_size": 65536 00:26:13.043 } 00:26:13.043 ] 00:26:13.043 } 00:26:13.043 } 00:26:13.043 }' 00:26:13.043 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:13.043 17:13:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:13.043 BaseBdev2 00:26:13.043 BaseBdev3 00:26:13.043 BaseBdev4' 00:26:13.043 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.043 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:13.043 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:13.043 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:13.043 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.043 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.043 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.043 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.301 17:13:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.301 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.302 [2024-11-08 17:13:49.884592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:13.302 [2024-11-08 17:13:49.884771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:13.302 [2024-11-08 17:13:49.884882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:13.302 [2024-11-08 17:13:49.885199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:13.302 [2024-11-08 17:13:49.885212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81128 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 81128 ']' 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 81128 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81128 00:26:13.302 killing process with pid 81128 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81128' 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 81128 00:26:13.302 17:13:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 81128 00:26:13.302 [2024-11-08 17:13:49.912960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:13.559 [2024-11-08 17:13:50.175920] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:14.494 ************************************ 00:26:14.494 END TEST raid5f_state_function_test 00:26:14.494 ************************************ 00:26:14.494 17:13:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:14.494 00:26:14.494 real 0m8.821s 00:26:14.494 user 0m13.976s 00:26:14.494 sys 0m1.501s 00:26:14.494 17:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:14.494 17:13:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.494 17:13:50 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:26:14.494 17:13:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:26:14.494 17:13:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:14.494 17:13:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:14.494 ************************************ 00:26:14.494 START TEST 
raid5f_state_function_test_sb 00:26:14.494 ************************************ 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:26:14.494 
17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:14.494 Process raid pid: 81768 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81768 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81768' 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81768 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 81768 ']' 00:26:14.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.494 17:13:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:14.495 17:13:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.495 17:13:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:14.495 17:13:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:14.495 [2024-11-08 17:13:51.083880] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:26:14.495 [2024-11-08 17:13:51.084200] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.752 [2024-11-08 17:13:51.243334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.752 [2024-11-08 17:13:51.366690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.017 [2024-11-08 17:13:51.520180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:15.017 [2024-11-08 17:13:51.520233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.583 [2024-11-08 17:13:52.027725] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:15.583 [2024-11-08 17:13:52.027796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:15.583 [2024-11-08 17:13:52.027807] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:15.583 [2024-11-08 17:13:52.027817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:15.583 [2024-11-08 17:13:52.027824] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:26:15.583 [2024-11-08 17:13:52.027833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:15.583 [2024-11-08 17:13:52.027839] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:15.583 [2024-11-08 17:13:52.027848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.583 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.583 "name": "Existed_Raid", 00:26:15.583 "uuid": "0876da24-1f0a-4ca3-8b79-e7dd7f72a8b6", 00:26:15.583 "strip_size_kb": 64, 00:26:15.583 "state": "configuring", 00:26:15.583 "raid_level": "raid5f", 00:26:15.583 "superblock": true, 00:26:15.583 "num_base_bdevs": 4, 00:26:15.583 "num_base_bdevs_discovered": 0, 00:26:15.583 "num_base_bdevs_operational": 4, 00:26:15.583 "base_bdevs_list": [ 00:26:15.583 { 00:26:15.583 "name": "BaseBdev1", 00:26:15.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.583 "is_configured": false, 00:26:15.583 "data_offset": 0, 00:26:15.583 "data_size": 0 00:26:15.583 }, 00:26:15.583 { 00:26:15.583 "name": "BaseBdev2", 00:26:15.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.583 "is_configured": false, 00:26:15.583 "data_offset": 0, 00:26:15.584 "data_size": 0 00:26:15.584 }, 00:26:15.584 { 00:26:15.584 "name": "BaseBdev3", 00:26:15.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.584 "is_configured": false, 00:26:15.584 "data_offset": 0, 00:26:15.584 "data_size": 0 00:26:15.584 }, 00:26:15.584 { 00:26:15.584 "name": "BaseBdev4", 00:26:15.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.584 "is_configured": false, 00:26:15.584 "data_offset": 0, 00:26:15.584 "data_size": 0 00:26:15.584 } 00:26:15.584 ] 00:26:15.584 }' 00:26:15.584 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.584 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.842 [2024-11-08 17:13:52.339728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:15.842 [2024-11-08 17:13:52.339784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.842 [2024-11-08 17:13:52.347742] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:15.842 [2024-11-08 17:13:52.347796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:15.842 [2024-11-08 17:13:52.347806] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:15.842 [2024-11-08 17:13:52.347816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:15.842 [2024-11-08 17:13:52.347822] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:15.842 [2024-11-08 17:13:52.347831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:15.842 [2024-11-08 17:13:52.347837] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:15.842 [2024-11-08 17:13:52.347846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.842 [2024-11-08 17:13:52.383006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:15.842 BaseBdev1 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.842 [ 00:26:15.842 { 00:26:15.842 "name": "BaseBdev1", 00:26:15.842 "aliases": [ 00:26:15.842 "7134c357-3290-4911-8d3d-fabe6302c74c" 00:26:15.842 ], 00:26:15.842 "product_name": "Malloc disk", 00:26:15.842 "block_size": 512, 00:26:15.842 "num_blocks": 65536, 00:26:15.842 "uuid": "7134c357-3290-4911-8d3d-fabe6302c74c", 00:26:15.842 "assigned_rate_limits": { 00:26:15.842 "rw_ios_per_sec": 0, 00:26:15.842 "rw_mbytes_per_sec": 0, 00:26:15.842 "r_mbytes_per_sec": 0, 00:26:15.842 "w_mbytes_per_sec": 0 00:26:15.842 }, 00:26:15.842 "claimed": true, 00:26:15.842 "claim_type": "exclusive_write", 00:26:15.842 "zoned": false, 00:26:15.842 "supported_io_types": { 00:26:15.842 "read": true, 00:26:15.842 "write": true, 00:26:15.842 "unmap": true, 00:26:15.842 "flush": true, 00:26:15.842 "reset": true, 00:26:15.842 "nvme_admin": false, 00:26:15.842 "nvme_io": false, 00:26:15.842 "nvme_io_md": false, 00:26:15.842 "write_zeroes": true, 00:26:15.842 "zcopy": true, 00:26:15.842 "get_zone_info": false, 00:26:15.842 "zone_management": false, 00:26:15.842 "zone_append": false, 00:26:15.842 "compare": false, 00:26:15.842 "compare_and_write": false, 00:26:15.842 "abort": true, 00:26:15.842 "seek_hole": false, 00:26:15.842 "seek_data": false, 00:26:15.842 "copy": true, 00:26:15.842 "nvme_iov_md": false 00:26:15.842 }, 00:26:15.842 "memory_domains": [ 00:26:15.842 { 00:26:15.842 "dma_device_id": "system", 00:26:15.842 "dma_device_type": 1 00:26:15.842 }, 00:26:15.842 { 00:26:15.842 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:26:15.842 "dma_device_type": 2 00:26:15.842 } 00:26:15.842 ], 00:26:15.842 "driver_specific": {} 00:26:15.842 } 00:26:15.842 ] 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.842 17:13:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.842 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.842 "name": "Existed_Raid", 00:26:15.842 "uuid": "acecf6ef-3485-4e18-b0a8-bc020aceab67", 00:26:15.842 "strip_size_kb": 64, 00:26:15.842 "state": "configuring", 00:26:15.842 "raid_level": "raid5f", 00:26:15.842 "superblock": true, 00:26:15.842 "num_base_bdevs": 4, 00:26:15.842 "num_base_bdevs_discovered": 1, 00:26:15.842 "num_base_bdevs_operational": 4, 00:26:15.842 "base_bdevs_list": [ 00:26:15.842 { 00:26:15.842 "name": "BaseBdev1", 00:26:15.842 "uuid": "7134c357-3290-4911-8d3d-fabe6302c74c", 00:26:15.842 "is_configured": true, 00:26:15.842 "data_offset": 2048, 00:26:15.842 "data_size": 63488 00:26:15.842 }, 00:26:15.842 { 00:26:15.843 "name": "BaseBdev2", 00:26:15.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.843 "is_configured": false, 00:26:15.843 "data_offset": 0, 00:26:15.843 "data_size": 0 00:26:15.843 }, 00:26:15.843 { 00:26:15.843 "name": "BaseBdev3", 00:26:15.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.843 "is_configured": false, 00:26:15.843 "data_offset": 0, 00:26:15.843 "data_size": 0 00:26:15.843 }, 00:26:15.843 { 00:26:15.843 "name": "BaseBdev4", 00:26:15.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.843 "is_configured": false, 00:26:15.843 "data_offset": 0, 00:26:15.843 "data_size": 0 00:26:15.843 } 00:26:15.843 ] 00:26:15.843 }' 00:26:15.843 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.843 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:16.101 17:13:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.101 [2024-11-08 17:13:52.715138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:16.101 [2024-11-08 17:13:52.715200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.101 [2024-11-08 17:13:52.723218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:16.101 [2024-11-08 17:13:52.725372] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:16.101 [2024-11-08 17:13:52.725492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:16.101 [2024-11-08 17:13:52.725550] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:16.101 [2024-11-08 17:13:52.725581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:16.101 [2024-11-08 17:13:52.725599] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:16.101 [2024-11-08 17:13:52.725619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.101 17:13:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.101 "name": "Existed_Raid", 00:26:16.101 "uuid": "8fff511e-8cf6-4c0a-9fb1-c2bcd2094ae0", 00:26:16.101 "strip_size_kb": 64, 00:26:16.101 "state": "configuring", 00:26:16.101 "raid_level": "raid5f", 00:26:16.101 "superblock": true, 00:26:16.101 "num_base_bdevs": 4, 00:26:16.101 "num_base_bdevs_discovered": 1, 00:26:16.101 "num_base_bdevs_operational": 4, 00:26:16.101 "base_bdevs_list": [ 00:26:16.101 { 00:26:16.101 "name": "BaseBdev1", 00:26:16.101 "uuid": "7134c357-3290-4911-8d3d-fabe6302c74c", 00:26:16.101 "is_configured": true, 00:26:16.101 "data_offset": 2048, 00:26:16.101 "data_size": 63488 00:26:16.101 }, 00:26:16.101 { 00:26:16.101 "name": "BaseBdev2", 00:26:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.101 "is_configured": false, 00:26:16.101 "data_offset": 0, 00:26:16.101 "data_size": 0 00:26:16.101 }, 00:26:16.101 { 00:26:16.101 "name": "BaseBdev3", 00:26:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.101 "is_configured": false, 00:26:16.101 "data_offset": 0, 00:26:16.101 "data_size": 0 00:26:16.101 }, 00:26:16.101 { 00:26:16.101 "name": "BaseBdev4", 00:26:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.101 "is_configured": false, 00:26:16.101 "data_offset": 0, 00:26:16.101 "data_size": 0 00:26:16.101 } 00:26:16.101 ] 00:26:16.101 }' 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.101 17:13:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.359 [2024-11-08 17:13:53.060960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:16.359 BaseBdev2 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.359 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.617 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.617 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:16.617 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.617 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.617 [ 00:26:16.617 { 00:26:16.617 "name": "BaseBdev2", 00:26:16.617 "aliases": [ 00:26:16.617 
"4138c26a-0dea-47f4-bead-106bd69222f7" 00:26:16.617 ], 00:26:16.617 "product_name": "Malloc disk", 00:26:16.617 "block_size": 512, 00:26:16.617 "num_blocks": 65536, 00:26:16.617 "uuid": "4138c26a-0dea-47f4-bead-106bd69222f7", 00:26:16.617 "assigned_rate_limits": { 00:26:16.617 "rw_ios_per_sec": 0, 00:26:16.617 "rw_mbytes_per_sec": 0, 00:26:16.617 "r_mbytes_per_sec": 0, 00:26:16.617 "w_mbytes_per_sec": 0 00:26:16.617 }, 00:26:16.617 "claimed": true, 00:26:16.617 "claim_type": "exclusive_write", 00:26:16.617 "zoned": false, 00:26:16.617 "supported_io_types": { 00:26:16.617 "read": true, 00:26:16.617 "write": true, 00:26:16.617 "unmap": true, 00:26:16.617 "flush": true, 00:26:16.617 "reset": true, 00:26:16.617 "nvme_admin": false, 00:26:16.617 "nvme_io": false, 00:26:16.617 "nvme_io_md": false, 00:26:16.617 "write_zeroes": true, 00:26:16.617 "zcopy": true, 00:26:16.617 "get_zone_info": false, 00:26:16.617 "zone_management": false, 00:26:16.617 "zone_append": false, 00:26:16.617 "compare": false, 00:26:16.617 "compare_and_write": false, 00:26:16.618 "abort": true, 00:26:16.618 "seek_hole": false, 00:26:16.618 "seek_data": false, 00:26:16.618 "copy": true, 00:26:16.618 "nvme_iov_md": false 00:26:16.618 }, 00:26:16.618 "memory_domains": [ 00:26:16.618 { 00:26:16.618 "dma_device_id": "system", 00:26:16.618 "dma_device_type": 1 00:26:16.618 }, 00:26:16.618 { 00:26:16.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.618 "dma_device_type": 2 00:26:16.618 } 00:26:16.618 ], 00:26:16.618 "driver_specific": {} 00:26:16.618 } 00:26:16.618 ] 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.618 "name": "Existed_Raid", 00:26:16.618 "uuid": 
"8fff511e-8cf6-4c0a-9fb1-c2bcd2094ae0", 00:26:16.618 "strip_size_kb": 64, 00:26:16.618 "state": "configuring", 00:26:16.618 "raid_level": "raid5f", 00:26:16.618 "superblock": true, 00:26:16.618 "num_base_bdevs": 4, 00:26:16.618 "num_base_bdevs_discovered": 2, 00:26:16.618 "num_base_bdevs_operational": 4, 00:26:16.618 "base_bdevs_list": [ 00:26:16.618 { 00:26:16.618 "name": "BaseBdev1", 00:26:16.618 "uuid": "7134c357-3290-4911-8d3d-fabe6302c74c", 00:26:16.618 "is_configured": true, 00:26:16.618 "data_offset": 2048, 00:26:16.618 "data_size": 63488 00:26:16.618 }, 00:26:16.618 { 00:26:16.618 "name": "BaseBdev2", 00:26:16.618 "uuid": "4138c26a-0dea-47f4-bead-106bd69222f7", 00:26:16.618 "is_configured": true, 00:26:16.618 "data_offset": 2048, 00:26:16.618 "data_size": 63488 00:26:16.618 }, 00:26:16.618 { 00:26:16.618 "name": "BaseBdev3", 00:26:16.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.618 "is_configured": false, 00:26:16.618 "data_offset": 0, 00:26:16.618 "data_size": 0 00:26:16.618 }, 00:26:16.618 { 00:26:16.618 "name": "BaseBdev4", 00:26:16.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.618 "is_configured": false, 00:26:16.618 "data_offset": 0, 00:26:16.618 "data_size": 0 00:26:16.618 } 00:26:16.618 ] 00:26:16.618 }' 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.618 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.876 [2024-11-08 17:13:53.446054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:16.876 BaseBdev3 
00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.876 [ 00:26:16.876 { 00:26:16.876 "name": "BaseBdev3", 00:26:16.876 "aliases": [ 00:26:16.876 "4c57c616-b656-41ff-bc80-a98ad88c97fa" 00:26:16.876 ], 00:26:16.876 "product_name": "Malloc disk", 00:26:16.876 "block_size": 512, 00:26:16.876 "num_blocks": 65536, 00:26:16.876 "uuid": "4c57c616-b656-41ff-bc80-a98ad88c97fa", 00:26:16.876 
"assigned_rate_limits": { 00:26:16.876 "rw_ios_per_sec": 0, 00:26:16.876 "rw_mbytes_per_sec": 0, 00:26:16.876 "r_mbytes_per_sec": 0, 00:26:16.876 "w_mbytes_per_sec": 0 00:26:16.876 }, 00:26:16.876 "claimed": true, 00:26:16.876 "claim_type": "exclusive_write", 00:26:16.876 "zoned": false, 00:26:16.876 "supported_io_types": { 00:26:16.876 "read": true, 00:26:16.876 "write": true, 00:26:16.876 "unmap": true, 00:26:16.876 "flush": true, 00:26:16.876 "reset": true, 00:26:16.876 "nvme_admin": false, 00:26:16.876 "nvme_io": false, 00:26:16.876 "nvme_io_md": false, 00:26:16.876 "write_zeroes": true, 00:26:16.876 "zcopy": true, 00:26:16.876 "get_zone_info": false, 00:26:16.876 "zone_management": false, 00:26:16.876 "zone_append": false, 00:26:16.876 "compare": false, 00:26:16.876 "compare_and_write": false, 00:26:16.876 "abort": true, 00:26:16.876 "seek_hole": false, 00:26:16.876 "seek_data": false, 00:26:16.876 "copy": true, 00:26:16.876 "nvme_iov_md": false 00:26:16.876 }, 00:26:16.876 "memory_domains": [ 00:26:16.876 { 00:26:16.876 "dma_device_id": "system", 00:26:16.876 "dma_device_type": 1 00:26:16.876 }, 00:26:16.876 { 00:26:16.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.876 "dma_device_type": 2 00:26:16.876 } 00:26:16.876 ], 00:26:16.876 "driver_specific": {} 00:26:16.876 } 00:26:16.876 ] 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:16.876 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.877 "name": "Existed_Raid", 00:26:16.877 "uuid": "8fff511e-8cf6-4c0a-9fb1-c2bcd2094ae0", 00:26:16.877 "strip_size_kb": 64, 00:26:16.877 "state": "configuring", 00:26:16.877 "raid_level": "raid5f", 00:26:16.877 "superblock": true, 00:26:16.877 "num_base_bdevs": 4, 00:26:16.877 "num_base_bdevs_discovered": 3, 
00:26:16.877 "num_base_bdevs_operational": 4, 00:26:16.877 "base_bdevs_list": [ 00:26:16.877 { 00:26:16.877 "name": "BaseBdev1", 00:26:16.877 "uuid": "7134c357-3290-4911-8d3d-fabe6302c74c", 00:26:16.877 "is_configured": true, 00:26:16.877 "data_offset": 2048, 00:26:16.877 "data_size": 63488 00:26:16.877 }, 00:26:16.877 { 00:26:16.877 "name": "BaseBdev2", 00:26:16.877 "uuid": "4138c26a-0dea-47f4-bead-106bd69222f7", 00:26:16.877 "is_configured": true, 00:26:16.877 "data_offset": 2048, 00:26:16.877 "data_size": 63488 00:26:16.877 }, 00:26:16.877 { 00:26:16.877 "name": "BaseBdev3", 00:26:16.877 "uuid": "4c57c616-b656-41ff-bc80-a98ad88c97fa", 00:26:16.877 "is_configured": true, 00:26:16.877 "data_offset": 2048, 00:26:16.877 "data_size": 63488 00:26:16.877 }, 00:26:16.877 { 00:26:16.877 "name": "BaseBdev4", 00:26:16.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.877 "is_configured": false, 00:26:16.877 "data_offset": 0, 00:26:16.877 "data_size": 0 00:26:16.877 } 00:26:16.877 ] 00:26:16.877 }' 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.877 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.136 [2024-11-08 17:13:53.815674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:17.136 [2024-11-08 17:13:53.816187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:17.136 [2024-11-08 17:13:53.816286] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:17.136 [2024-11-08 
17:13:53.816585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:17.136 BaseBdev4 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.136 [2024-11-08 17:13:53.821586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:17.136 [2024-11-08 17:13:53.821609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:17.136 [2024-11-08 17:13:53.821878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:17.136 17:13:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.136 [ 00:26:17.136 { 00:26:17.136 "name": "BaseBdev4", 00:26:17.136 "aliases": [ 00:26:17.136 "8f72120e-8a60-4597-8464-7507cac33cc9" 00:26:17.136 ], 00:26:17.136 "product_name": "Malloc disk", 00:26:17.136 "block_size": 512, 00:26:17.136 "num_blocks": 65536, 00:26:17.136 "uuid": "8f72120e-8a60-4597-8464-7507cac33cc9", 00:26:17.136 "assigned_rate_limits": { 00:26:17.136 "rw_ios_per_sec": 0, 00:26:17.136 "rw_mbytes_per_sec": 0, 00:26:17.136 "r_mbytes_per_sec": 0, 00:26:17.136 "w_mbytes_per_sec": 0 00:26:17.136 }, 00:26:17.136 "claimed": true, 00:26:17.136 "claim_type": "exclusive_write", 00:26:17.136 "zoned": false, 00:26:17.136 "supported_io_types": { 00:26:17.136 "read": true, 00:26:17.136 "write": true, 00:26:17.136 "unmap": true, 00:26:17.136 "flush": true, 00:26:17.136 "reset": true, 00:26:17.136 "nvme_admin": false, 00:26:17.136 "nvme_io": false, 00:26:17.136 "nvme_io_md": false, 00:26:17.136 "write_zeroes": true, 00:26:17.136 "zcopy": true, 00:26:17.136 "get_zone_info": false, 00:26:17.136 "zone_management": false, 00:26:17.136 "zone_append": false, 00:26:17.136 "compare": false, 00:26:17.136 "compare_and_write": false, 00:26:17.136 "abort": true, 00:26:17.136 "seek_hole": false, 00:26:17.136 "seek_data": false, 00:26:17.136 "copy": true, 00:26:17.136 "nvme_iov_md": false 00:26:17.136 }, 00:26:17.136 "memory_domains": [ 00:26:17.136 { 00:26:17.136 "dma_device_id": "system", 00:26:17.136 "dma_device_type": 1 00:26:17.136 }, 00:26:17.136 { 00:26:17.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.136 "dma_device_type": 2 00:26:17.136 } 00:26:17.136 ], 00:26:17.136 "driver_specific": {} 00:26:17.136 } 00:26:17.136 ] 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.136 17:13:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.136 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:26:17.394 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.394 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.394 "name": "Existed_Raid", 00:26:17.394 "uuid": "8fff511e-8cf6-4c0a-9fb1-c2bcd2094ae0", 00:26:17.394 "strip_size_kb": 64, 00:26:17.394 "state": "online", 00:26:17.394 "raid_level": "raid5f", 00:26:17.394 "superblock": true, 00:26:17.394 "num_base_bdevs": 4, 00:26:17.394 "num_base_bdevs_discovered": 4, 00:26:17.394 "num_base_bdevs_operational": 4, 00:26:17.394 "base_bdevs_list": [ 00:26:17.394 { 00:26:17.394 "name": "BaseBdev1", 00:26:17.394 "uuid": "7134c357-3290-4911-8d3d-fabe6302c74c", 00:26:17.394 "is_configured": true, 00:26:17.394 "data_offset": 2048, 00:26:17.394 "data_size": 63488 00:26:17.394 }, 00:26:17.394 { 00:26:17.394 "name": "BaseBdev2", 00:26:17.394 "uuid": "4138c26a-0dea-47f4-bead-106bd69222f7", 00:26:17.394 "is_configured": true, 00:26:17.394 "data_offset": 2048, 00:26:17.394 "data_size": 63488 00:26:17.394 }, 00:26:17.394 { 00:26:17.394 "name": "BaseBdev3", 00:26:17.394 "uuid": "4c57c616-b656-41ff-bc80-a98ad88c97fa", 00:26:17.394 "is_configured": true, 00:26:17.394 "data_offset": 2048, 00:26:17.394 "data_size": 63488 00:26:17.394 }, 00:26:17.394 { 00:26:17.394 "name": "BaseBdev4", 00:26:17.394 "uuid": "8f72120e-8a60-4597-8464-7507cac33cc9", 00:26:17.394 "is_configured": true, 00:26:17.394 "data_offset": 2048, 00:26:17.394 "data_size": 63488 00:26:17.394 } 00:26:17.394 ] 00:26:17.394 }' 00:26:17.394 17:13:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.394 17:13:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.652 [2024-11-08 17:13:54.191918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:17.652 "name": "Existed_Raid", 00:26:17.652 "aliases": [ 00:26:17.652 "8fff511e-8cf6-4c0a-9fb1-c2bcd2094ae0" 00:26:17.652 ], 00:26:17.652 "product_name": "Raid Volume", 00:26:17.652 "block_size": 512, 00:26:17.652 "num_blocks": 190464, 00:26:17.652 "uuid": "8fff511e-8cf6-4c0a-9fb1-c2bcd2094ae0", 00:26:17.652 "assigned_rate_limits": { 00:26:17.652 "rw_ios_per_sec": 0, 00:26:17.652 "rw_mbytes_per_sec": 0, 00:26:17.652 "r_mbytes_per_sec": 0, 00:26:17.652 "w_mbytes_per_sec": 0 00:26:17.652 }, 00:26:17.652 "claimed": false, 00:26:17.652 "zoned": false, 00:26:17.652 "supported_io_types": { 00:26:17.652 "read": true, 00:26:17.652 "write": true, 00:26:17.652 "unmap": false, 00:26:17.652 "flush": false, 
00:26:17.652 "reset": true, 00:26:17.652 "nvme_admin": false, 00:26:17.652 "nvme_io": false, 00:26:17.652 "nvme_io_md": false, 00:26:17.652 "write_zeroes": true, 00:26:17.652 "zcopy": false, 00:26:17.652 "get_zone_info": false, 00:26:17.652 "zone_management": false, 00:26:17.652 "zone_append": false, 00:26:17.652 "compare": false, 00:26:17.652 "compare_and_write": false, 00:26:17.652 "abort": false, 00:26:17.652 "seek_hole": false, 00:26:17.652 "seek_data": false, 00:26:17.652 "copy": false, 00:26:17.652 "nvme_iov_md": false 00:26:17.652 }, 00:26:17.652 "driver_specific": { 00:26:17.652 "raid": { 00:26:17.652 "uuid": "8fff511e-8cf6-4c0a-9fb1-c2bcd2094ae0", 00:26:17.652 "strip_size_kb": 64, 00:26:17.652 "state": "online", 00:26:17.652 "raid_level": "raid5f", 00:26:17.652 "superblock": true, 00:26:17.652 "num_base_bdevs": 4, 00:26:17.652 "num_base_bdevs_discovered": 4, 00:26:17.652 "num_base_bdevs_operational": 4, 00:26:17.652 "base_bdevs_list": [ 00:26:17.652 { 00:26:17.652 "name": "BaseBdev1", 00:26:17.652 "uuid": "7134c357-3290-4911-8d3d-fabe6302c74c", 00:26:17.652 "is_configured": true, 00:26:17.652 "data_offset": 2048, 00:26:17.652 "data_size": 63488 00:26:17.652 }, 00:26:17.652 { 00:26:17.652 "name": "BaseBdev2", 00:26:17.652 "uuid": "4138c26a-0dea-47f4-bead-106bd69222f7", 00:26:17.652 "is_configured": true, 00:26:17.652 "data_offset": 2048, 00:26:17.652 "data_size": 63488 00:26:17.652 }, 00:26:17.652 { 00:26:17.652 "name": "BaseBdev3", 00:26:17.652 "uuid": "4c57c616-b656-41ff-bc80-a98ad88c97fa", 00:26:17.652 "is_configured": true, 00:26:17.652 "data_offset": 2048, 00:26:17.652 "data_size": 63488 00:26:17.652 }, 00:26:17.652 { 00:26:17.652 "name": "BaseBdev4", 00:26:17.652 "uuid": "8f72120e-8a60-4597-8464-7507cac33cc9", 00:26:17.652 "is_configured": true, 00:26:17.652 "data_offset": 2048, 00:26:17.652 "data_size": 63488 00:26:17.652 } 00:26:17.652 ] 00:26:17.652 } 00:26:17.652 } 00:26:17.652 }' 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:17.652 BaseBdev2 00:26:17.652 BaseBdev3 00:26:17.652 BaseBdev4' 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.652 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:17.653 17:13:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.653 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:17.911 17:13:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.911 [2024-11-08 17:13:54.419784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.911 "name": "Existed_Raid", 00:26:17.911 "uuid": "8fff511e-8cf6-4c0a-9fb1-c2bcd2094ae0", 00:26:17.911 "strip_size_kb": 64, 00:26:17.911 "state": "online", 00:26:17.911 "raid_level": "raid5f", 00:26:17.911 "superblock": true, 00:26:17.911 "num_base_bdevs": 4, 00:26:17.911 "num_base_bdevs_discovered": 3, 00:26:17.911 "num_base_bdevs_operational": 3, 00:26:17.911 "base_bdevs_list": [ 00:26:17.911 { 00:26:17.911 "name": 
null, 00:26:17.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.911 "is_configured": false, 00:26:17.911 "data_offset": 0, 00:26:17.911 "data_size": 63488 00:26:17.911 }, 00:26:17.911 { 00:26:17.911 "name": "BaseBdev2", 00:26:17.911 "uuid": "4138c26a-0dea-47f4-bead-106bd69222f7", 00:26:17.911 "is_configured": true, 00:26:17.911 "data_offset": 2048, 00:26:17.911 "data_size": 63488 00:26:17.911 }, 00:26:17.911 { 00:26:17.911 "name": "BaseBdev3", 00:26:17.911 "uuid": "4c57c616-b656-41ff-bc80-a98ad88c97fa", 00:26:17.911 "is_configured": true, 00:26:17.911 "data_offset": 2048, 00:26:17.911 "data_size": 63488 00:26:17.911 }, 00:26:17.911 { 00:26:17.911 "name": "BaseBdev4", 00:26:17.911 "uuid": "8f72120e-8a60-4597-8464-7507cac33cc9", 00:26:17.911 "is_configured": true, 00:26:17.911 "data_offset": 2048, 00:26:17.911 "data_size": 63488 00:26:17.911 } 00:26:17.911 ] 00:26:17.911 }' 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.911 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.170 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.170 [2024-11-08 17:13:54.847547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:18.170 [2024-11-08 17:13:54.847858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:18.428 [2024-11-08 17:13:54.910493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.428 17:13:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.428 [2024-11-08 17:13:54.950524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.428 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.429 [2024-11-08 
17:13:55.053956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:18.429 [2024-11-08 17:13:55.054008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:18.429 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.429 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:18.429 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:18.429 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.429 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:18.429 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.429 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.429 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.687 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:18.687 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:18.687 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:26:18.687 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:18.687 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:18.687 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:18.687 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.688 17:13:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 BaseBdev2 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 [ 00:26:18.688 { 00:26:18.688 "name": "BaseBdev2", 00:26:18.688 "aliases": [ 00:26:18.688 "9d2bca77-58a3-498a-b13d-5c48a7f30374" 00:26:18.688 ], 00:26:18.688 "product_name": "Malloc disk", 00:26:18.688 "block_size": 512, 00:26:18.688 
"num_blocks": 65536, 00:26:18.688 "uuid": "9d2bca77-58a3-498a-b13d-5c48a7f30374", 00:26:18.688 "assigned_rate_limits": { 00:26:18.688 "rw_ios_per_sec": 0, 00:26:18.688 "rw_mbytes_per_sec": 0, 00:26:18.688 "r_mbytes_per_sec": 0, 00:26:18.688 "w_mbytes_per_sec": 0 00:26:18.688 }, 00:26:18.688 "claimed": false, 00:26:18.688 "zoned": false, 00:26:18.688 "supported_io_types": { 00:26:18.688 "read": true, 00:26:18.688 "write": true, 00:26:18.688 "unmap": true, 00:26:18.688 "flush": true, 00:26:18.688 "reset": true, 00:26:18.688 "nvme_admin": false, 00:26:18.688 "nvme_io": false, 00:26:18.688 "nvme_io_md": false, 00:26:18.688 "write_zeroes": true, 00:26:18.688 "zcopy": true, 00:26:18.688 "get_zone_info": false, 00:26:18.688 "zone_management": false, 00:26:18.688 "zone_append": false, 00:26:18.688 "compare": false, 00:26:18.688 "compare_and_write": false, 00:26:18.688 "abort": true, 00:26:18.688 "seek_hole": false, 00:26:18.688 "seek_data": false, 00:26:18.688 "copy": true, 00:26:18.688 "nvme_iov_md": false 00:26:18.688 }, 00:26:18.688 "memory_domains": [ 00:26:18.688 { 00:26:18.688 "dma_device_id": "system", 00:26:18.688 "dma_device_type": 1 00:26:18.688 }, 00:26:18.688 { 00:26:18.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.688 "dma_device_type": 2 00:26:18.688 } 00:26:18.688 ], 00:26:18.688 "driver_specific": {} 00:26:18.688 } 00:26:18.688 ] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:18.688 17:13:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 BaseBdev3 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 [ 00:26:18.688 { 00:26:18.688 "name": "BaseBdev3", 00:26:18.688 "aliases": [ 00:26:18.688 
"993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367" 00:26:18.688 ], 00:26:18.688 "product_name": "Malloc disk", 00:26:18.688 "block_size": 512, 00:26:18.688 "num_blocks": 65536, 00:26:18.688 "uuid": "993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367", 00:26:18.688 "assigned_rate_limits": { 00:26:18.688 "rw_ios_per_sec": 0, 00:26:18.688 "rw_mbytes_per_sec": 0, 00:26:18.688 "r_mbytes_per_sec": 0, 00:26:18.688 "w_mbytes_per_sec": 0 00:26:18.688 }, 00:26:18.688 "claimed": false, 00:26:18.688 "zoned": false, 00:26:18.688 "supported_io_types": { 00:26:18.688 "read": true, 00:26:18.688 "write": true, 00:26:18.688 "unmap": true, 00:26:18.688 "flush": true, 00:26:18.688 "reset": true, 00:26:18.688 "nvme_admin": false, 00:26:18.688 "nvme_io": false, 00:26:18.688 "nvme_io_md": false, 00:26:18.688 "write_zeroes": true, 00:26:18.688 "zcopy": true, 00:26:18.688 "get_zone_info": false, 00:26:18.688 "zone_management": false, 00:26:18.688 "zone_append": false, 00:26:18.688 "compare": false, 00:26:18.688 "compare_and_write": false, 00:26:18.688 "abort": true, 00:26:18.688 "seek_hole": false, 00:26:18.688 "seek_data": false, 00:26:18.688 "copy": true, 00:26:18.688 "nvme_iov_md": false 00:26:18.688 }, 00:26:18.688 "memory_domains": [ 00:26:18.688 { 00:26:18.688 "dma_device_id": "system", 00:26:18.688 "dma_device_type": 1 00:26:18.688 }, 00:26:18.688 { 00:26:18.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.688 "dma_device_type": 2 00:26:18.688 } 00:26:18.688 ], 00:26:18.688 "driver_specific": {} 00:26:18.688 } 00:26:18.688 ] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:18.688 17:13:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 BaseBdev4 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.688 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:26:18.688 [ 00:26:18.688 { 00:26:18.688 "name": "BaseBdev4", 00:26:18.688 "aliases": [ 00:26:18.688 "66037785-ce1a-47df-b70c-fa998bfd9dd7" 00:26:18.688 ], 00:26:18.688 "product_name": "Malloc disk", 00:26:18.688 "block_size": 512, 00:26:18.688 "num_blocks": 65536, 00:26:18.688 "uuid": "66037785-ce1a-47df-b70c-fa998bfd9dd7", 00:26:18.688 "assigned_rate_limits": { 00:26:18.689 "rw_ios_per_sec": 0, 00:26:18.689 "rw_mbytes_per_sec": 0, 00:26:18.689 "r_mbytes_per_sec": 0, 00:26:18.689 "w_mbytes_per_sec": 0 00:26:18.689 }, 00:26:18.689 "claimed": false, 00:26:18.689 "zoned": false, 00:26:18.689 "supported_io_types": { 00:26:18.689 "read": true, 00:26:18.689 "write": true, 00:26:18.689 "unmap": true, 00:26:18.689 "flush": true, 00:26:18.689 "reset": true, 00:26:18.689 "nvme_admin": false, 00:26:18.689 "nvme_io": false, 00:26:18.689 "nvme_io_md": false, 00:26:18.689 "write_zeroes": true, 00:26:18.689 "zcopy": true, 00:26:18.689 "get_zone_info": false, 00:26:18.689 "zone_management": false, 00:26:18.689 "zone_append": false, 00:26:18.689 "compare": false, 00:26:18.689 "compare_and_write": false, 00:26:18.689 "abort": true, 00:26:18.689 "seek_hole": false, 00:26:18.689 "seek_data": false, 00:26:18.689 "copy": true, 00:26:18.689 "nvme_iov_md": false 00:26:18.689 }, 00:26:18.689 "memory_domains": [ 00:26:18.689 { 00:26:18.689 "dma_device_id": "system", 00:26:18.689 "dma_device_type": 1 00:26:18.689 }, 00:26:18.689 { 00:26:18.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.689 "dma_device_type": 2 00:26:18.689 } 00:26:18.689 ], 00:26:18.689 "driver_specific": {} 00:26:18.689 } 00:26:18.689 ] 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:18.689 17:13:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.689 [2024-11-08 17:13:55.341967] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:18.689 [2024-11-08 17:13:55.342109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:18.689 [2024-11-08 17:13:55.342181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:18.689 [2024-11-08 17:13:55.344177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:18.689 [2024-11-08 17:13:55.344316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:18.689 "name": "Existed_Raid", 00:26:18.689 "uuid": "e5998bd2-5e53-4232-a50a-0fb70e468edb", 00:26:18.689 "strip_size_kb": 64, 00:26:18.689 "state": "configuring", 00:26:18.689 "raid_level": "raid5f", 00:26:18.689 "superblock": true, 00:26:18.689 "num_base_bdevs": 4, 00:26:18.689 "num_base_bdevs_discovered": 3, 00:26:18.689 "num_base_bdevs_operational": 4, 00:26:18.689 "base_bdevs_list": [ 00:26:18.689 { 00:26:18.689 "name": "BaseBdev1", 00:26:18.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.689 "is_configured": false, 00:26:18.689 "data_offset": 0, 00:26:18.689 "data_size": 0 00:26:18.689 }, 00:26:18.689 { 00:26:18.689 "name": "BaseBdev2", 00:26:18.689 "uuid": "9d2bca77-58a3-498a-b13d-5c48a7f30374", 00:26:18.689 "is_configured": true, 00:26:18.689 "data_offset": 2048, 00:26:18.689 
"data_size": 63488 00:26:18.689 }, 00:26:18.689 { 00:26:18.689 "name": "BaseBdev3", 00:26:18.689 "uuid": "993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367", 00:26:18.689 "is_configured": true, 00:26:18.689 "data_offset": 2048, 00:26:18.689 "data_size": 63488 00:26:18.689 }, 00:26:18.689 { 00:26:18.689 "name": "BaseBdev4", 00:26:18.689 "uuid": "66037785-ce1a-47df-b70c-fa998bfd9dd7", 00:26:18.689 "is_configured": true, 00:26:18.689 "data_offset": 2048, 00:26:18.689 "data_size": 63488 00:26:18.689 } 00:26:18.689 ] 00:26:18.689 }' 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:18.689 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.255 [2024-11-08 17:13:55.686055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:19.255 17:13:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.255 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:19.255 "name": "Existed_Raid", 00:26:19.255 "uuid": "e5998bd2-5e53-4232-a50a-0fb70e468edb", 00:26:19.255 "strip_size_kb": 64, 00:26:19.255 "state": "configuring", 00:26:19.255 "raid_level": "raid5f", 00:26:19.255 "superblock": true, 00:26:19.255 "num_base_bdevs": 4, 00:26:19.255 "num_base_bdevs_discovered": 2, 00:26:19.255 "num_base_bdevs_operational": 4, 00:26:19.256 "base_bdevs_list": [ 00:26:19.256 { 00:26:19.256 "name": "BaseBdev1", 00:26:19.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.256 "is_configured": false, 00:26:19.256 "data_offset": 0, 00:26:19.256 "data_size": 0 00:26:19.256 }, 00:26:19.256 { 00:26:19.256 "name": null, 00:26:19.256 "uuid": "9d2bca77-58a3-498a-b13d-5c48a7f30374", 00:26:19.256 
"is_configured": false, 00:26:19.256 "data_offset": 0, 00:26:19.256 "data_size": 63488 00:26:19.256 }, 00:26:19.256 { 00:26:19.256 "name": "BaseBdev3", 00:26:19.256 "uuid": "993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367", 00:26:19.256 "is_configured": true, 00:26:19.256 "data_offset": 2048, 00:26:19.256 "data_size": 63488 00:26:19.256 }, 00:26:19.256 { 00:26:19.256 "name": "BaseBdev4", 00:26:19.256 "uuid": "66037785-ce1a-47df-b70c-fa998bfd9dd7", 00:26:19.256 "is_configured": true, 00:26:19.256 "data_offset": 2048, 00:26:19.256 "data_size": 63488 00:26:19.256 } 00:26:19.256 ] 00:26:19.256 }' 00:26:19.256 17:13:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:19.256 17:13:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.515 [2024-11-08 17:13:56.071378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:26:19.515 BaseBdev1 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.515 [ 00:26:19.515 { 00:26:19.515 "name": "BaseBdev1", 00:26:19.515 "aliases": [ 00:26:19.515 "50c1ae0d-b417-485a-9df9-8103e8e5dedf" 00:26:19.515 ], 00:26:19.515 "product_name": "Malloc disk", 00:26:19.515 "block_size": 512, 00:26:19.515 "num_blocks": 65536, 00:26:19.515 "uuid": "50c1ae0d-b417-485a-9df9-8103e8e5dedf", 
00:26:19.515 "assigned_rate_limits": { 00:26:19.515 "rw_ios_per_sec": 0, 00:26:19.515 "rw_mbytes_per_sec": 0, 00:26:19.515 "r_mbytes_per_sec": 0, 00:26:19.515 "w_mbytes_per_sec": 0 00:26:19.515 }, 00:26:19.515 "claimed": true, 00:26:19.515 "claim_type": "exclusive_write", 00:26:19.515 "zoned": false, 00:26:19.515 "supported_io_types": { 00:26:19.515 "read": true, 00:26:19.515 "write": true, 00:26:19.515 "unmap": true, 00:26:19.515 "flush": true, 00:26:19.515 "reset": true, 00:26:19.515 "nvme_admin": false, 00:26:19.515 "nvme_io": false, 00:26:19.515 "nvme_io_md": false, 00:26:19.515 "write_zeroes": true, 00:26:19.515 "zcopy": true, 00:26:19.515 "get_zone_info": false, 00:26:19.515 "zone_management": false, 00:26:19.515 "zone_append": false, 00:26:19.515 "compare": false, 00:26:19.515 "compare_and_write": false, 00:26:19.515 "abort": true, 00:26:19.515 "seek_hole": false, 00:26:19.515 "seek_data": false, 00:26:19.515 "copy": true, 00:26:19.515 "nvme_iov_md": false 00:26:19.515 }, 00:26:19.515 "memory_domains": [ 00:26:19.515 { 00:26:19.515 "dma_device_id": "system", 00:26:19.515 "dma_device_type": 1 00:26:19.515 }, 00:26:19.515 { 00:26:19.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.515 "dma_device_type": 2 00:26:19.515 } 00:26:19.515 ], 00:26:19.515 "driver_specific": {} 00:26:19.515 } 00:26:19.515 ] 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:19.515 17:13:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.515 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:19.515 "name": "Existed_Raid", 00:26:19.515 "uuid": "e5998bd2-5e53-4232-a50a-0fb70e468edb", 00:26:19.515 "strip_size_kb": 64, 00:26:19.515 "state": "configuring", 00:26:19.515 "raid_level": "raid5f", 00:26:19.515 "superblock": true, 00:26:19.515 "num_base_bdevs": 4, 00:26:19.515 "num_base_bdevs_discovered": 3, 00:26:19.515 "num_base_bdevs_operational": 4, 00:26:19.515 "base_bdevs_list": [ 00:26:19.515 { 00:26:19.515 "name": "BaseBdev1", 00:26:19.515 "uuid": "50c1ae0d-b417-485a-9df9-8103e8e5dedf", 
00:26:19.515 "is_configured": true, 00:26:19.515 "data_offset": 2048, 00:26:19.515 "data_size": 63488 00:26:19.515 }, 00:26:19.515 { 00:26:19.515 "name": null, 00:26:19.515 "uuid": "9d2bca77-58a3-498a-b13d-5c48a7f30374", 00:26:19.515 "is_configured": false, 00:26:19.515 "data_offset": 0, 00:26:19.515 "data_size": 63488 00:26:19.515 }, 00:26:19.515 { 00:26:19.515 "name": "BaseBdev3", 00:26:19.515 "uuid": "993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367", 00:26:19.515 "is_configured": true, 00:26:19.515 "data_offset": 2048, 00:26:19.515 "data_size": 63488 00:26:19.515 }, 00:26:19.515 { 00:26:19.515 "name": "BaseBdev4", 00:26:19.515 "uuid": "66037785-ce1a-47df-b70c-fa998bfd9dd7", 00:26:19.515 "is_configured": true, 00:26:19.515 "data_offset": 2048, 00:26:19.515 "data_size": 63488 00:26:19.515 } 00:26:19.515 ] 00:26:19.515 }' 00:26:19.516 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:19.516 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.774 [2024-11-08 17:13:56.443534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:19.774 "name": "Existed_Raid", 00:26:19.774 "uuid": "e5998bd2-5e53-4232-a50a-0fb70e468edb", 00:26:19.774 "strip_size_kb": 64, 00:26:19.774 "state": "configuring", 00:26:19.774 "raid_level": "raid5f", 00:26:19.774 "superblock": true, 00:26:19.774 "num_base_bdevs": 4, 00:26:19.774 "num_base_bdevs_discovered": 2, 00:26:19.774 "num_base_bdevs_operational": 4, 00:26:19.774 "base_bdevs_list": [ 00:26:19.774 { 00:26:19.774 "name": "BaseBdev1", 00:26:19.774 "uuid": "50c1ae0d-b417-485a-9df9-8103e8e5dedf", 00:26:19.774 "is_configured": true, 00:26:19.774 "data_offset": 2048, 00:26:19.774 "data_size": 63488 00:26:19.774 }, 00:26:19.774 { 00:26:19.774 "name": null, 00:26:19.774 "uuid": "9d2bca77-58a3-498a-b13d-5c48a7f30374", 00:26:19.774 "is_configured": false, 00:26:19.774 "data_offset": 0, 00:26:19.774 "data_size": 63488 00:26:19.774 }, 00:26:19.774 { 00:26:19.774 "name": null, 00:26:19.774 "uuid": "993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367", 00:26:19.774 "is_configured": false, 00:26:19.774 "data_offset": 0, 00:26:19.774 "data_size": 63488 00:26:19.774 }, 00:26:19.774 { 00:26:19.774 "name": "BaseBdev4", 00:26:19.774 "uuid": "66037785-ce1a-47df-b70c-fa998bfd9dd7", 00:26:19.774 "is_configured": true, 00:26:19.774 "data_offset": 2048, 00:26:19.774 "data_size": 63488 00:26:19.774 } 00:26:19.774 ] 00:26:19.774 }' 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:19.774 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.340 [2024-11-08 17:13:56.815640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:20.340 "name": "Existed_Raid", 00:26:20.340 "uuid": "e5998bd2-5e53-4232-a50a-0fb70e468edb", 00:26:20.340 "strip_size_kb": 64, 00:26:20.340 "state": "configuring", 00:26:20.340 "raid_level": "raid5f", 00:26:20.340 "superblock": true, 00:26:20.340 "num_base_bdevs": 4, 00:26:20.340 "num_base_bdevs_discovered": 3, 00:26:20.340 "num_base_bdevs_operational": 4, 00:26:20.340 "base_bdevs_list": [ 00:26:20.340 { 00:26:20.340 "name": "BaseBdev1", 00:26:20.340 "uuid": "50c1ae0d-b417-485a-9df9-8103e8e5dedf", 00:26:20.340 "is_configured": true, 00:26:20.340 "data_offset": 2048, 00:26:20.340 "data_size": 63488 00:26:20.340 }, 00:26:20.340 { 00:26:20.340 "name": null, 00:26:20.340 "uuid": "9d2bca77-58a3-498a-b13d-5c48a7f30374", 00:26:20.340 "is_configured": false, 00:26:20.340 "data_offset": 0, 00:26:20.340 "data_size": 63488 00:26:20.340 }, 00:26:20.340 { 00:26:20.340 "name": "BaseBdev3", 00:26:20.340 "uuid": "993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367", 
00:26:20.340 "is_configured": true, 00:26:20.340 "data_offset": 2048, 00:26:20.340 "data_size": 63488 00:26:20.340 }, 00:26:20.340 { 00:26:20.340 "name": "BaseBdev4", 00:26:20.340 "uuid": "66037785-ce1a-47df-b70c-fa998bfd9dd7", 00:26:20.340 "is_configured": true, 00:26:20.340 "data_offset": 2048, 00:26:20.340 "data_size": 63488 00:26:20.340 } 00:26:20.340 ] 00:26:20.340 }' 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:20.340 17:13:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.598 [2024-11-08 17:13:57.175775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:20.598 "name": "Existed_Raid", 00:26:20.598 "uuid": "e5998bd2-5e53-4232-a50a-0fb70e468edb", 00:26:20.598 "strip_size_kb": 64, 00:26:20.598 "state": "configuring", 00:26:20.598 "raid_level": "raid5f", 
00:26:20.598 "superblock": true, 00:26:20.598 "num_base_bdevs": 4, 00:26:20.598 "num_base_bdevs_discovered": 2, 00:26:20.598 "num_base_bdevs_operational": 4, 00:26:20.598 "base_bdevs_list": [ 00:26:20.598 { 00:26:20.598 "name": null, 00:26:20.598 "uuid": "50c1ae0d-b417-485a-9df9-8103e8e5dedf", 00:26:20.598 "is_configured": false, 00:26:20.598 "data_offset": 0, 00:26:20.598 "data_size": 63488 00:26:20.598 }, 00:26:20.598 { 00:26:20.598 "name": null, 00:26:20.598 "uuid": "9d2bca77-58a3-498a-b13d-5c48a7f30374", 00:26:20.598 "is_configured": false, 00:26:20.598 "data_offset": 0, 00:26:20.598 "data_size": 63488 00:26:20.598 }, 00:26:20.598 { 00:26:20.598 "name": "BaseBdev3", 00:26:20.598 "uuid": "993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367", 00:26:20.598 "is_configured": true, 00:26:20.598 "data_offset": 2048, 00:26:20.598 "data_size": 63488 00:26:20.598 }, 00:26:20.598 { 00:26:20.598 "name": "BaseBdev4", 00:26:20.598 "uuid": "66037785-ce1a-47df-b70c-fa998bfd9dd7", 00:26:20.598 "is_configured": true, 00:26:20.598 "data_offset": 2048, 00:26:20.598 "data_size": 63488 00:26:20.598 } 00:26:20.598 ] 00:26:20.598 }' 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:20.598 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.164 [2024-11-08 17:13:57.619100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.164 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:21.164 "name": "Existed_Raid", 00:26:21.164 "uuid": "e5998bd2-5e53-4232-a50a-0fb70e468edb", 00:26:21.164 "strip_size_kb": 64, 00:26:21.164 "state": "configuring", 00:26:21.164 "raid_level": "raid5f", 00:26:21.164 "superblock": true, 00:26:21.164 "num_base_bdevs": 4, 00:26:21.164 "num_base_bdevs_discovered": 3, 00:26:21.164 "num_base_bdevs_operational": 4, 00:26:21.164 "base_bdevs_list": [ 00:26:21.164 { 00:26:21.164 "name": null, 00:26:21.164 "uuid": "50c1ae0d-b417-485a-9df9-8103e8e5dedf", 00:26:21.164 "is_configured": false, 00:26:21.164 "data_offset": 0, 00:26:21.164 "data_size": 63488 00:26:21.164 }, 00:26:21.164 { 00:26:21.164 "name": "BaseBdev2", 00:26:21.164 "uuid": "9d2bca77-58a3-498a-b13d-5c48a7f30374", 00:26:21.164 "is_configured": true, 00:26:21.164 "data_offset": 2048, 00:26:21.164 "data_size": 63488 00:26:21.164 }, 00:26:21.164 { 00:26:21.164 "name": "BaseBdev3", 00:26:21.164 "uuid": "993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367", 00:26:21.164 "is_configured": true, 00:26:21.164 "data_offset": 2048, 00:26:21.164 "data_size": 63488 00:26:21.164 }, 00:26:21.164 { 00:26:21.164 "name": "BaseBdev4", 00:26:21.164 "uuid": "66037785-ce1a-47df-b70c-fa998bfd9dd7", 00:26:21.164 "is_configured": true, 00:26:21.164 "data_offset": 2048, 00:26:21.164 "data_size": 63488 00:26:21.164 } 00:26:21.164 ] 00:26:21.165 }' 00:26:21.165 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:26:21.165 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.423 17:13:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 50c1ae0d-b417-485a-9df9-8103e8e5dedf 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.423 [2024-11-08 17:13:58.044300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:21.423 [2024-11-08 17:13:58.044549] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:21.423 [2024-11-08 17:13:58.044563] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:21.423 NewBaseBdev 00:26:21.423 [2024-11-08 17:13:58.044844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.423 [2024-11-08 17:13:58.049737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:21.423 [2024-11-08 17:13:58.049775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:21.423 [2024-11-08 17:13:58.050033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.423 [ 00:26:21.423 { 00:26:21.423 "name": "NewBaseBdev", 00:26:21.423 "aliases": [ 00:26:21.423 "50c1ae0d-b417-485a-9df9-8103e8e5dedf" 00:26:21.423 ], 00:26:21.423 "product_name": "Malloc disk", 00:26:21.423 "block_size": 512, 00:26:21.423 "num_blocks": 65536, 00:26:21.423 "uuid": "50c1ae0d-b417-485a-9df9-8103e8e5dedf", 00:26:21.423 "assigned_rate_limits": { 00:26:21.423 "rw_ios_per_sec": 0, 00:26:21.423 "rw_mbytes_per_sec": 0, 00:26:21.423 "r_mbytes_per_sec": 0, 00:26:21.423 "w_mbytes_per_sec": 0 00:26:21.423 }, 00:26:21.423 "claimed": true, 00:26:21.423 "claim_type": "exclusive_write", 00:26:21.423 "zoned": false, 00:26:21.423 "supported_io_types": { 00:26:21.423 "read": true, 00:26:21.423 "write": true, 00:26:21.423 "unmap": true, 00:26:21.423 "flush": true, 00:26:21.423 "reset": true, 00:26:21.423 "nvme_admin": false, 00:26:21.423 "nvme_io": false, 00:26:21.423 "nvme_io_md": false, 00:26:21.423 "write_zeroes": true, 00:26:21.423 "zcopy": true, 00:26:21.423 "get_zone_info": false, 00:26:21.423 "zone_management": false, 00:26:21.423 "zone_append": false, 00:26:21.423 "compare": false, 00:26:21.423 "compare_and_write": false, 00:26:21.423 "abort": true, 00:26:21.423 "seek_hole": false, 00:26:21.423 "seek_data": false, 00:26:21.423 "copy": true, 00:26:21.423 "nvme_iov_md": false 00:26:21.423 }, 00:26:21.423 "memory_domains": [ 00:26:21.423 { 00:26:21.423 "dma_device_id": "system", 00:26:21.423 "dma_device_type": 1 00:26:21.423 }, 00:26:21.423 { 00:26:21.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.423 "dma_device_type": 2 00:26:21.423 } 
00:26:21.423 ], 00:26:21.423 "driver_specific": {} 00:26:21.423 } 00:26:21.423 ] 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.423 
17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.423 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:21.423 "name": "Existed_Raid", 00:26:21.423 "uuid": "e5998bd2-5e53-4232-a50a-0fb70e468edb", 00:26:21.423 "strip_size_kb": 64, 00:26:21.423 "state": "online", 00:26:21.423 "raid_level": "raid5f", 00:26:21.423 "superblock": true, 00:26:21.423 "num_base_bdevs": 4, 00:26:21.423 "num_base_bdevs_discovered": 4, 00:26:21.423 "num_base_bdevs_operational": 4, 00:26:21.423 "base_bdevs_list": [ 00:26:21.423 { 00:26:21.423 "name": "NewBaseBdev", 00:26:21.423 "uuid": "50c1ae0d-b417-485a-9df9-8103e8e5dedf", 00:26:21.423 "is_configured": true, 00:26:21.423 "data_offset": 2048, 00:26:21.423 "data_size": 63488 00:26:21.423 }, 00:26:21.423 { 00:26:21.423 "name": "BaseBdev2", 00:26:21.423 "uuid": "9d2bca77-58a3-498a-b13d-5c48a7f30374", 00:26:21.423 "is_configured": true, 00:26:21.423 "data_offset": 2048, 00:26:21.423 "data_size": 63488 00:26:21.423 }, 00:26:21.423 { 00:26:21.423 "name": "BaseBdev3", 00:26:21.423 "uuid": "993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367", 00:26:21.423 "is_configured": true, 00:26:21.423 "data_offset": 2048, 00:26:21.423 "data_size": 63488 00:26:21.423 }, 00:26:21.423 { 00:26:21.423 "name": "BaseBdev4", 00:26:21.423 "uuid": "66037785-ce1a-47df-b70c-fa998bfd9dd7", 00:26:21.424 "is_configured": true, 00:26:21.424 "data_offset": 2048, 00:26:21.424 "data_size": 63488 00:26:21.424 } 00:26:21.424 ] 00:26:21.424 }' 00:26:21.424 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:21.424 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.990 [2024-11-08 17:13:58.411969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.990 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:21.990 "name": "Existed_Raid", 00:26:21.990 "aliases": [ 00:26:21.990 "e5998bd2-5e53-4232-a50a-0fb70e468edb" 00:26:21.990 ], 00:26:21.990 "product_name": "Raid Volume", 00:26:21.990 "block_size": 512, 00:26:21.990 "num_blocks": 190464, 00:26:21.990 "uuid": "e5998bd2-5e53-4232-a50a-0fb70e468edb", 00:26:21.990 "assigned_rate_limits": { 00:26:21.990 "rw_ios_per_sec": 0, 00:26:21.990 "rw_mbytes_per_sec": 0, 00:26:21.990 "r_mbytes_per_sec": 0, 00:26:21.990 "w_mbytes_per_sec": 0 00:26:21.990 }, 00:26:21.990 "claimed": false, 00:26:21.990 "zoned": false, 00:26:21.990 "supported_io_types": { 00:26:21.990 "read": true, 00:26:21.990 "write": true, 00:26:21.990 "unmap": false, 00:26:21.990 "flush": false, 
00:26:21.990 "reset": true, 00:26:21.990 "nvme_admin": false, 00:26:21.990 "nvme_io": false, 00:26:21.990 "nvme_io_md": false, 00:26:21.990 "write_zeroes": true, 00:26:21.990 "zcopy": false, 00:26:21.990 "get_zone_info": false, 00:26:21.990 "zone_management": false, 00:26:21.990 "zone_append": false, 00:26:21.990 "compare": false, 00:26:21.990 "compare_and_write": false, 00:26:21.990 "abort": false, 00:26:21.990 "seek_hole": false, 00:26:21.990 "seek_data": false, 00:26:21.990 "copy": false, 00:26:21.990 "nvme_iov_md": false 00:26:21.990 }, 00:26:21.990 "driver_specific": { 00:26:21.990 "raid": { 00:26:21.990 "uuid": "e5998bd2-5e53-4232-a50a-0fb70e468edb", 00:26:21.990 "strip_size_kb": 64, 00:26:21.990 "state": "online", 00:26:21.990 "raid_level": "raid5f", 00:26:21.990 "superblock": true, 00:26:21.990 "num_base_bdevs": 4, 00:26:21.990 "num_base_bdevs_discovered": 4, 00:26:21.990 "num_base_bdevs_operational": 4, 00:26:21.990 "base_bdevs_list": [ 00:26:21.990 { 00:26:21.990 "name": "NewBaseBdev", 00:26:21.990 "uuid": "50c1ae0d-b417-485a-9df9-8103e8e5dedf", 00:26:21.990 "is_configured": true, 00:26:21.990 "data_offset": 2048, 00:26:21.990 "data_size": 63488 00:26:21.990 }, 00:26:21.990 { 00:26:21.990 "name": "BaseBdev2", 00:26:21.990 "uuid": "9d2bca77-58a3-498a-b13d-5c48a7f30374", 00:26:21.990 "is_configured": true, 00:26:21.990 "data_offset": 2048, 00:26:21.990 "data_size": 63488 00:26:21.990 }, 00:26:21.991 { 00:26:21.991 "name": "BaseBdev3", 00:26:21.991 "uuid": "993f15b2-3bc9-46b2-a1eb-8f4bbb1d7367", 00:26:21.991 "is_configured": true, 00:26:21.991 "data_offset": 2048, 00:26:21.991 "data_size": 63488 00:26:21.991 }, 00:26:21.991 { 00:26:21.991 "name": "BaseBdev4", 00:26:21.991 "uuid": "66037785-ce1a-47df-b70c-fa998bfd9dd7", 00:26:21.991 "is_configured": true, 00:26:21.991 "data_offset": 2048, 00:26:21.991 "data_size": 63488 00:26:21.991 } 00:26:21.991 ] 00:26:21.991 } 00:26:21.991 } 00:26:21.991 }' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:21.991 BaseBdev2 00:26:21.991 BaseBdev3 00:26:21.991 BaseBdev4' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.991 [2024-11-08 17:13:58.627715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:21.991 [2024-11-08 17:13:58.627909] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:21.991 [2024-11-08 17:13:58.628003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:21.991 [2024-11-08 17:13:58.628312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:21.991 [2024-11-08 17:13:58.628325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81768 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 81768 ']' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 
81768 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81768 00:26:21.991 killing process with pid 81768 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81768' 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 81768 00:26:21.991 [2024-11-08 17:13:58.656697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:21.991 17:13:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 81768 00:26:22.249 [2024-11-08 17:13:58.918113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:23.182 ************************************ 00:26:23.182 END TEST raid5f_state_function_test_sb 00:26:23.182 ************************************ 00:26:23.182 17:13:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:23.182 00:26:23.182 real 0m8.663s 00:26:23.182 user 0m13.664s 00:26:23.182 sys 0m1.455s 00:26:23.182 17:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:23.182 17:13:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.182 17:13:59 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:26:23.182 17:13:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 
-le 1 ']' 00:26:23.182 17:13:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:23.182 17:13:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:23.182 ************************************ 00:26:23.182 START TEST raid5f_superblock_test 00:26:23.182 ************************************ 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:26:23.182 17:13:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82406 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82406 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 82406 ']' 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:23.182 17:13:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.182 [2024-11-08 17:13:59.798358] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:26:23.182 [2024-11-08 17:13:59.798577] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82406 ] 00:26:23.441 [2024-11-08 17:13:59.955277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.441 [2024-11-08 17:14:00.073445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.699 [2024-11-08 17:14:00.221666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:23.699 [2024-11-08 17:14:00.221736] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.956 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.216 malloc1 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.216 [2024-11-08 17:14:00.695880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:24.216 [2024-11-08 17:14:00.695948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.216 [2024-11-08 17:14:00.695967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:24.216 [2024-11-08 17:14:00.695977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.216 [2024-11-08 17:14:00.698275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.216 [2024-11-08 17:14:00.698325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:24.216 pt1 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.216 malloc2 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.216 [2024-11-08 17:14:00.734089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:24.216 [2024-11-08 17:14:00.734137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.216 [2024-11-08 17:14:00.734158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:24.216 [2024-11-08 17:14:00.734168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.216 [2024-11-08 17:14:00.736405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.216 [2024-11-08 17:14:00.736440] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:24.216 pt2 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.216 malloc3 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.216 [2024-11-08 17:14:00.783468] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:24.216 [2024-11-08 17:14:00.783519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.216 [2024-11-08 17:14:00.783542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:24.216 [2024-11-08 17:14:00.783553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.216 [2024-11-08 17:14:00.785784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.216 [2024-11-08 17:14:00.785816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:24.216 pt3 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.216 17:14:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.216 malloc4 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.216 [2024-11-08 17:14:00.821767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:24.216 [2024-11-08 17:14:00.821814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.216 [2024-11-08 17:14:00.821836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:24.216 [2024-11-08 17:14:00.821846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.216 [2024-11-08 17:14:00.824104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.216 [2024-11-08 17:14:00.824138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:24.216 pt4 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:24.216 [2024-11-08 17:14:00.829806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:24.216 [2024-11-08 17:14:00.831736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:24.216 [2024-11-08 17:14:00.831938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:24.216 [2024-11-08 17:14:00.832011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:24.216 [2024-11-08 17:14:00.832206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:24.216 [2024-11-08 17:14:00.832222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:24.216 [2024-11-08 17:14:00.832480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:24.216 [2024-11-08 17:14:00.837487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:24.216 [2024-11-08 17:14:00.837510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:24.216 [2024-11-08 17:14:00.837685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.216 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:24.217 
17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:24.217 "name": "raid_bdev1", 00:26:24.217 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:24.217 "strip_size_kb": 64, 00:26:24.217 "state": "online", 00:26:24.217 "raid_level": "raid5f", 00:26:24.217 "superblock": true, 00:26:24.217 "num_base_bdevs": 4, 00:26:24.217 "num_base_bdevs_discovered": 4, 00:26:24.217 "num_base_bdevs_operational": 4, 00:26:24.217 "base_bdevs_list": [ 00:26:24.217 { 00:26:24.217 "name": "pt1", 00:26:24.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:24.217 "is_configured": true, 00:26:24.217 "data_offset": 2048, 00:26:24.217 "data_size": 63488 00:26:24.217 }, 00:26:24.217 { 00:26:24.217 "name": "pt2", 00:26:24.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:24.217 "is_configured": true, 00:26:24.217 "data_offset": 2048, 00:26:24.217 
"data_size": 63488 00:26:24.217 }, 00:26:24.217 { 00:26:24.217 "name": "pt3", 00:26:24.217 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:24.217 "is_configured": true, 00:26:24.217 "data_offset": 2048, 00:26:24.217 "data_size": 63488 00:26:24.217 }, 00:26:24.217 { 00:26:24.217 "name": "pt4", 00:26:24.217 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:24.217 "is_configured": true, 00:26:24.217 "data_offset": 2048, 00:26:24.217 "data_size": 63488 00:26:24.217 } 00:26:24.217 ] 00:26:24.217 }' 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:24.217 17:14:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:24.489 [2024-11-08 17:14:01.163984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.489 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:24.489 "name": "raid_bdev1", 00:26:24.489 "aliases": [ 00:26:24.489 "108e1580-114f-4362-bc30-28c89f6b9f7b" 00:26:24.489 ], 00:26:24.489 "product_name": "Raid Volume", 00:26:24.489 "block_size": 512, 00:26:24.489 "num_blocks": 190464, 00:26:24.489 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:24.489 "assigned_rate_limits": { 00:26:24.489 "rw_ios_per_sec": 0, 00:26:24.489 "rw_mbytes_per_sec": 0, 00:26:24.489 "r_mbytes_per_sec": 0, 00:26:24.490 "w_mbytes_per_sec": 0 00:26:24.490 }, 00:26:24.490 "claimed": false, 00:26:24.490 "zoned": false, 00:26:24.490 "supported_io_types": { 00:26:24.490 "read": true, 00:26:24.490 "write": true, 00:26:24.490 "unmap": false, 00:26:24.490 "flush": false, 00:26:24.490 "reset": true, 00:26:24.490 "nvme_admin": false, 00:26:24.490 "nvme_io": false, 00:26:24.490 "nvme_io_md": false, 00:26:24.490 "write_zeroes": true, 00:26:24.490 "zcopy": false, 00:26:24.490 "get_zone_info": false, 00:26:24.490 "zone_management": false, 00:26:24.490 "zone_append": false, 00:26:24.490 "compare": false, 00:26:24.490 "compare_and_write": false, 00:26:24.490 "abort": false, 00:26:24.490 "seek_hole": false, 00:26:24.490 "seek_data": false, 00:26:24.490 "copy": false, 00:26:24.490 "nvme_iov_md": false 00:26:24.490 }, 00:26:24.490 "driver_specific": { 00:26:24.490 "raid": { 00:26:24.490 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:24.490 "strip_size_kb": 64, 00:26:24.490 "state": "online", 00:26:24.490 "raid_level": "raid5f", 00:26:24.490 "superblock": true, 00:26:24.490 "num_base_bdevs": 4, 00:26:24.490 "num_base_bdevs_discovered": 4, 00:26:24.490 "num_base_bdevs_operational": 4, 00:26:24.490 "base_bdevs_list": [ 00:26:24.490 { 00:26:24.490 "name": "pt1", 00:26:24.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:24.490 "is_configured": true, 00:26:24.490 "data_offset": 2048, 
00:26:24.490 "data_size": 63488 00:26:24.490 }, 00:26:24.490 { 00:26:24.490 "name": "pt2", 00:26:24.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:24.490 "is_configured": true, 00:26:24.490 "data_offset": 2048, 00:26:24.490 "data_size": 63488 00:26:24.490 }, 00:26:24.490 { 00:26:24.490 "name": "pt3", 00:26:24.490 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:24.490 "is_configured": true, 00:26:24.490 "data_offset": 2048, 00:26:24.490 "data_size": 63488 00:26:24.490 }, 00:26:24.490 { 00:26:24.490 "name": "pt4", 00:26:24.490 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:24.490 "is_configured": true, 00:26:24.490 "data_offset": 2048, 00:26:24.490 "data_size": 63488 00:26:24.490 } 00:26:24.490 ] 00:26:24.490 } 00:26:24.490 } 00:26:24.490 }' 00:26:24.490 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:24.748 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:24.748 pt2 00:26:24.748 pt3 00:26:24.748 pt4' 00:26:24.748 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:24.748 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:24.748 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:24.748 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:24.748 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.748 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:24.748 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.748 17:14:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.748 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.749 [2024-11-08 17:14:01.423991] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=108e1580-114f-4362-bc30-28c89f6b9f7b 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
108e1580-114f-4362-bc30-28c89f6b9f7b ']' 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.749 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.749 [2024-11-08 17:14:01.459813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:24.749 [2024-11-08 17:14:01.459839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:24.749 [2024-11-08 17:14:01.459923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:24.749 [2024-11-08 17:14:01.460018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:24.749 [2024-11-08 17:14:01.460034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:25.007 
17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.007 17:14:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.007 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.008 [2024-11-08 17:14:01.571855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:25.008 [2024-11-08 17:14:01.573856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:25.008 [2024-11-08 17:14:01.573902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:25.008 [2024-11-08 17:14:01.573936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:25.008 [2024-11-08 17:14:01.573989] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:25.008 [2024-11-08 17:14:01.574040] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:25.008 [2024-11-08 17:14:01.574060] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:25.008 [2024-11-08 17:14:01.574079] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:25.008 [2024-11-08 17:14:01.574092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:25.008 [2024-11-08 17:14:01.574104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:25.008 request: 00:26:25.008 { 00:26:25.008 "name": "raid_bdev1", 00:26:25.008 "raid_level": "raid5f", 00:26:25.008 "base_bdevs": [ 00:26:25.008 "malloc1", 00:26:25.008 "malloc2", 00:26:25.008 "malloc3", 00:26:25.008 "malloc4" 00:26:25.008 ], 00:26:25.008 "strip_size_kb": 64, 00:26:25.008 "superblock": false, 00:26:25.008 "method": "bdev_raid_create", 00:26:25.008 "req_id": 1 00:26:25.008 } 00:26:25.008 Got JSON-RPC error response 
00:26:25.008 response: 00:26:25.008 { 00:26:25.008 "code": -17, 00:26:25.008 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:25.008 } 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.008 [2024-11-08 17:14:01.615827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:25.008 [2024-11-08 17:14:01.615878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:26:25.008 [2024-11-08 17:14:01.615896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:25.008 [2024-11-08 17:14:01.615907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.008 [2024-11-08 17:14:01.618217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.008 [2024-11-08 17:14:01.618254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:25.008 [2024-11-08 17:14:01.618340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:25.008 [2024-11-08 17:14:01.618397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:25.008 pt1 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.008 "name": "raid_bdev1", 00:26:25.008 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:25.008 "strip_size_kb": 64, 00:26:25.008 "state": "configuring", 00:26:25.008 "raid_level": "raid5f", 00:26:25.008 "superblock": true, 00:26:25.008 "num_base_bdevs": 4, 00:26:25.008 "num_base_bdevs_discovered": 1, 00:26:25.008 "num_base_bdevs_operational": 4, 00:26:25.008 "base_bdevs_list": [ 00:26:25.008 { 00:26:25.008 "name": "pt1", 00:26:25.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:25.008 "is_configured": true, 00:26:25.008 "data_offset": 2048, 00:26:25.008 "data_size": 63488 00:26:25.008 }, 00:26:25.008 { 00:26:25.008 "name": null, 00:26:25.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:25.008 "is_configured": false, 00:26:25.008 "data_offset": 2048, 00:26:25.008 "data_size": 63488 00:26:25.008 }, 00:26:25.008 { 00:26:25.008 "name": null, 00:26:25.008 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:25.008 "is_configured": false, 00:26:25.008 "data_offset": 2048, 00:26:25.008 "data_size": 63488 00:26:25.008 }, 00:26:25.008 { 00:26:25.008 "name": null, 00:26:25.008 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:25.008 "is_configured": false, 00:26:25.008 "data_offset": 2048, 00:26:25.008 "data_size": 63488 00:26:25.008 } 00:26:25.008 ] 00:26:25.008 }' 
00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.008 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.267 [2024-11-08 17:14:01.939954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:25.267 [2024-11-08 17:14:01.940039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.267 [2024-11-08 17:14:01.940063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:25.267 [2024-11-08 17:14:01.940076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.267 [2024-11-08 17:14:01.940561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.267 [2024-11-08 17:14:01.940581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:25.267 [2024-11-08 17:14:01.940669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:25.267 [2024-11-08 17:14:01.940695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:25.267 pt2 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.267 [2024-11-08 17:14:01.947936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:25.267 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:25.268 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:25.268 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.268 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.268 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.268 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.268 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.268 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.268 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.268 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.268 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:25.525 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.525 "name": "raid_bdev1", 00:26:25.525 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:25.525 "strip_size_kb": 64, 00:26:25.525 "state": "configuring", 00:26:25.525 "raid_level": "raid5f", 00:26:25.525 "superblock": true, 00:26:25.525 "num_base_bdevs": 4, 00:26:25.525 "num_base_bdevs_discovered": 1, 00:26:25.525 "num_base_bdevs_operational": 4, 00:26:25.525 "base_bdevs_list": [ 00:26:25.525 { 00:26:25.525 "name": "pt1", 00:26:25.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:25.525 "is_configured": true, 00:26:25.525 "data_offset": 2048, 00:26:25.525 "data_size": 63488 00:26:25.525 }, 00:26:25.525 { 00:26:25.525 "name": null, 00:26:25.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:25.525 "is_configured": false, 00:26:25.525 "data_offset": 0, 00:26:25.525 "data_size": 63488 00:26:25.525 }, 00:26:25.525 { 00:26:25.525 "name": null, 00:26:25.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:25.525 "is_configured": false, 00:26:25.525 "data_offset": 2048, 00:26:25.525 "data_size": 63488 00:26:25.525 }, 00:26:25.525 { 00:26:25.525 "name": null, 00:26:25.525 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:25.525 "is_configured": false, 00:26:25.525 "data_offset": 2048, 00:26:25.525 "data_size": 63488 00:26:25.525 } 00:26:25.525 ] 00:26:25.525 }' 00:26:25.525 17:14:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.525 17:14:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 [2024-11-08 17:14:02.288072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:25.784 [2024-11-08 17:14:02.288147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.784 [2024-11-08 17:14:02.288173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:25.784 [2024-11-08 17:14:02.288185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.784 [2024-11-08 17:14:02.288668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.784 [2024-11-08 17:14:02.288684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:25.784 [2024-11-08 17:14:02.288795] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:25.784 [2024-11-08 17:14:02.288824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:25.784 pt2 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 [2024-11-08 17:14:02.296009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:26:25.784 [2024-11-08 17:14:02.296054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.784 [2024-11-08 17:14:02.296072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:25.784 [2024-11-08 17:14:02.296080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.784 [2024-11-08 17:14:02.296453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.784 [2024-11-08 17:14:02.296473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:25.784 [2024-11-08 17:14:02.296533] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:25.784 [2024-11-08 17:14:02.296551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:25.784 pt3 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 [2024-11-08 17:14:02.303989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:25.784 [2024-11-08 17:14:02.304033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.784 [2024-11-08 17:14:02.304051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:25.784 [2024-11-08 17:14:02.304061] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.784 [2024-11-08 17:14:02.304423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.784 [2024-11-08 17:14:02.304437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:25.784 [2024-11-08 17:14:02.304490] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:25.784 [2024-11-08 17:14:02.304506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:25.784 [2024-11-08 17:14:02.304636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:25.784 [2024-11-08 17:14:02.304646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:25.784 [2024-11-08 17:14:02.304908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:25.784 [2024-11-08 17:14:02.309638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:25.784 [2024-11-08 17:14:02.309660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:25.784 [2024-11-08 17:14:02.309836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.784 pt4 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.784 "name": "raid_bdev1", 00:26:25.784 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:25.784 "strip_size_kb": 64, 00:26:25.784 "state": "online", 00:26:25.784 "raid_level": "raid5f", 00:26:25.784 "superblock": true, 00:26:25.784 "num_base_bdevs": 4, 00:26:25.784 "num_base_bdevs_discovered": 4, 00:26:25.784 "num_base_bdevs_operational": 4, 00:26:25.784 "base_bdevs_list": [ 00:26:25.784 { 00:26:25.784 "name": "pt1", 00:26:25.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:25.784 "is_configured": true, 00:26:25.784 
"data_offset": 2048, 00:26:25.784 "data_size": 63488 00:26:25.784 }, 00:26:25.784 { 00:26:25.784 "name": "pt2", 00:26:25.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:25.784 "is_configured": true, 00:26:25.784 "data_offset": 2048, 00:26:25.784 "data_size": 63488 00:26:25.784 }, 00:26:25.784 { 00:26:25.784 "name": "pt3", 00:26:25.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:25.784 "is_configured": true, 00:26:25.784 "data_offset": 2048, 00:26:25.784 "data_size": 63488 00:26:25.784 }, 00:26:25.784 { 00:26:25.784 "name": "pt4", 00:26:25.784 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:25.784 "is_configured": true, 00:26:25.784 "data_offset": 2048, 00:26:25.784 "data_size": 63488 00:26:25.784 } 00:26:25.784 ] 00:26:25.784 }' 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.784 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.071 17:14:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.071 [2024-11-08 17:14:02.659804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:26.071 "name": "raid_bdev1", 00:26:26.071 "aliases": [ 00:26:26.071 "108e1580-114f-4362-bc30-28c89f6b9f7b" 00:26:26.071 ], 00:26:26.071 "product_name": "Raid Volume", 00:26:26.071 "block_size": 512, 00:26:26.071 "num_blocks": 190464, 00:26:26.071 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:26.071 "assigned_rate_limits": { 00:26:26.071 "rw_ios_per_sec": 0, 00:26:26.071 "rw_mbytes_per_sec": 0, 00:26:26.071 "r_mbytes_per_sec": 0, 00:26:26.071 "w_mbytes_per_sec": 0 00:26:26.071 }, 00:26:26.071 "claimed": false, 00:26:26.071 "zoned": false, 00:26:26.071 "supported_io_types": { 00:26:26.071 "read": true, 00:26:26.071 "write": true, 00:26:26.071 "unmap": false, 00:26:26.071 "flush": false, 00:26:26.071 "reset": true, 00:26:26.071 "nvme_admin": false, 00:26:26.071 "nvme_io": false, 00:26:26.071 "nvme_io_md": false, 00:26:26.071 "write_zeroes": true, 00:26:26.071 "zcopy": false, 00:26:26.071 "get_zone_info": false, 00:26:26.071 "zone_management": false, 00:26:26.071 "zone_append": false, 00:26:26.071 "compare": false, 00:26:26.071 "compare_and_write": false, 00:26:26.071 "abort": false, 00:26:26.071 "seek_hole": false, 00:26:26.071 "seek_data": false, 00:26:26.071 "copy": false, 00:26:26.071 "nvme_iov_md": false 00:26:26.071 }, 00:26:26.071 "driver_specific": { 00:26:26.071 "raid": { 00:26:26.071 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:26.071 "strip_size_kb": 64, 00:26:26.071 "state": "online", 00:26:26.071 "raid_level": "raid5f", 00:26:26.071 "superblock": true, 00:26:26.071 "num_base_bdevs": 4, 00:26:26.071 "num_base_bdevs_discovered": 4, 
00:26:26.071 "num_base_bdevs_operational": 4, 00:26:26.071 "base_bdevs_list": [ 00:26:26.071 { 00:26:26.071 "name": "pt1", 00:26:26.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:26.071 "is_configured": true, 00:26:26.071 "data_offset": 2048, 00:26:26.071 "data_size": 63488 00:26:26.071 }, 00:26:26.071 { 00:26:26.071 "name": "pt2", 00:26:26.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:26.071 "is_configured": true, 00:26:26.071 "data_offset": 2048, 00:26:26.071 "data_size": 63488 00:26:26.071 }, 00:26:26.071 { 00:26:26.071 "name": "pt3", 00:26:26.071 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:26.071 "is_configured": true, 00:26:26.071 "data_offset": 2048, 00:26:26.071 "data_size": 63488 00:26:26.071 }, 00:26:26.071 { 00:26:26.071 "name": "pt4", 00:26:26.071 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:26.071 "is_configured": true, 00:26:26.071 "data_offset": 2048, 00:26:26.071 "data_size": 63488 00:26:26.071 } 00:26:26.071 ] 00:26:26.071 } 00:26:26.071 } 00:26:26.071 }' 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:26.071 pt2 00:26:26.071 pt3 00:26:26.071 pt4' 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.071 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.330 17:14:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.330 [2024-11-08 17:14:02.891905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.330 
17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 108e1580-114f-4362-bc30-28c89f6b9f7b '!=' 108e1580-114f-4362-bc30-28c89f6b9f7b ']' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.330 [2024-11-08 17:14:02.927627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:26.330 "name": "raid_bdev1", 00:26:26.330 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:26.330 "strip_size_kb": 64, 00:26:26.330 "state": "online", 00:26:26.330 "raid_level": "raid5f", 00:26:26.330 "superblock": true, 00:26:26.330 "num_base_bdevs": 4, 00:26:26.330 "num_base_bdevs_discovered": 3, 00:26:26.330 "num_base_bdevs_operational": 3, 00:26:26.330 "base_bdevs_list": [ 00:26:26.330 { 00:26:26.330 "name": null, 00:26:26.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.330 "is_configured": false, 00:26:26.330 "data_offset": 0, 00:26:26.330 "data_size": 63488 00:26:26.330 }, 00:26:26.330 { 00:26:26.330 "name": "pt2", 00:26:26.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:26.330 "is_configured": true, 00:26:26.330 "data_offset": 2048, 00:26:26.330 "data_size": 63488 00:26:26.330 }, 00:26:26.330 { 00:26:26.330 "name": "pt3", 00:26:26.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:26.330 "is_configured": true, 00:26:26.330 "data_offset": 2048, 00:26:26.330 "data_size": 63488 00:26:26.330 }, 00:26:26.330 { 00:26:26.330 "name": "pt4", 00:26:26.330 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:26.330 "is_configured": true, 00:26:26.330 
"data_offset": 2048, 00:26:26.330 "data_size": 63488 00:26:26.330 } 00:26:26.330 ] 00:26:26.330 }' 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:26.330 17:14:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.588 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:26.588 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.588 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.588 [2024-11-08 17:14:03.267700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:26.588 [2024-11-08 17:14:03.267851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:26.588 [2024-11-08 17:14:03.267999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:26.588 [2024-11-08 17:14:03.268113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:26.588 [2024-11-08 17:14:03.268155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:26.588 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.588 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:26:26.588 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.588 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.588 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.588 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.846 17:14:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.847 [2024-11-08 17:14:03.343684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:26.847 [2024-11-08 17:14:03.343740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.847 [2024-11-08 17:14:03.343777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:26.847 [2024-11-08 17:14:03.343789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.847 [2024-11-08 17:14:03.346188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.847 [2024-11-08 17:14:03.346323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:26.847 [2024-11-08 17:14:03.346421] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:26.847 [2024-11-08 17:14:03.346471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:26.847 pt2 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:26.847 "name": "raid_bdev1", 00:26:26.847 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:26.847 "strip_size_kb": 64, 00:26:26.847 "state": "configuring", 00:26:26.847 "raid_level": "raid5f", 00:26:26.847 "superblock": true, 00:26:26.847 
"num_base_bdevs": 4, 00:26:26.847 "num_base_bdevs_discovered": 1, 00:26:26.847 "num_base_bdevs_operational": 3, 00:26:26.847 "base_bdevs_list": [ 00:26:26.847 { 00:26:26.847 "name": null, 00:26:26.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.847 "is_configured": false, 00:26:26.847 "data_offset": 2048, 00:26:26.847 "data_size": 63488 00:26:26.847 }, 00:26:26.847 { 00:26:26.847 "name": "pt2", 00:26:26.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:26.847 "is_configured": true, 00:26:26.847 "data_offset": 2048, 00:26:26.847 "data_size": 63488 00:26:26.847 }, 00:26:26.847 { 00:26:26.847 "name": null, 00:26:26.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:26.847 "is_configured": false, 00:26:26.847 "data_offset": 2048, 00:26:26.847 "data_size": 63488 00:26:26.847 }, 00:26:26.847 { 00:26:26.847 "name": null, 00:26:26.847 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:26.847 "is_configured": false, 00:26:26.847 "data_offset": 2048, 00:26:26.847 "data_size": 63488 00:26:26.847 } 00:26:26.847 ] 00:26:26.847 }' 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:26.847 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.107 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:27.107 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.108 [2024-11-08 17:14:03.651816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:27.108 [2024-11-08 
17:14:03.651875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.108 [2024-11-08 17:14:03.651897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:27.108 [2024-11-08 17:14:03.651907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.108 [2024-11-08 17:14:03.652358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.108 [2024-11-08 17:14:03.652374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:27.108 [2024-11-08 17:14:03.652457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:27.108 [2024-11-08 17:14:03.652483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:27.108 pt3 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:27.108 "name": "raid_bdev1", 00:26:27.108 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:27.108 "strip_size_kb": 64, 00:26:27.108 "state": "configuring", 00:26:27.108 "raid_level": "raid5f", 00:26:27.108 "superblock": true, 00:26:27.108 "num_base_bdevs": 4, 00:26:27.108 "num_base_bdevs_discovered": 2, 00:26:27.108 "num_base_bdevs_operational": 3, 00:26:27.108 "base_bdevs_list": [ 00:26:27.108 { 00:26:27.108 "name": null, 00:26:27.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.108 "is_configured": false, 00:26:27.108 "data_offset": 2048, 00:26:27.108 "data_size": 63488 00:26:27.108 }, 00:26:27.108 { 00:26:27.108 "name": "pt2", 00:26:27.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:27.108 "is_configured": true, 00:26:27.108 "data_offset": 2048, 00:26:27.108 "data_size": 63488 00:26:27.108 }, 00:26:27.108 { 00:26:27.108 "name": "pt3", 00:26:27.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:27.108 "is_configured": true, 00:26:27.108 "data_offset": 2048, 00:26:27.108 "data_size": 63488 00:26:27.108 }, 00:26:27.108 { 00:26:27.108 "name": null, 00:26:27.108 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:27.108 "is_configured": false, 00:26:27.108 "data_offset": 2048, 
00:26:27.108 "data_size": 63488 00:26:27.108 } 00:26:27.108 ] 00:26:27.108 }' 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:27.108 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.383 [2024-11-08 17:14:03.971916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:27.383 [2024-11-08 17:14:03.971984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.383 [2024-11-08 17:14:03.972009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:27.383 [2024-11-08 17:14:03.972019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.383 [2024-11-08 17:14:03.972482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.383 [2024-11-08 17:14:03.972502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:27.383 [2024-11-08 17:14:03.972587] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:27.383 [2024-11-08 17:14:03.972610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:27.383 [2024-11-08 17:14:03.972748] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:27.383 [2024-11-08 17:14:03.972783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:27.383 [2024-11-08 17:14:03.973052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:27.383 [2024-11-08 17:14:03.978035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:27.383 [2024-11-08 17:14:03.978058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:27.383 [2024-11-08 17:14:03.978362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:27.383 pt4 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:27.383 
17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.383 17:14:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.383 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:27.383 "name": "raid_bdev1", 00:26:27.383 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:27.383 "strip_size_kb": 64, 00:26:27.383 "state": "online", 00:26:27.383 "raid_level": "raid5f", 00:26:27.383 "superblock": true, 00:26:27.383 "num_base_bdevs": 4, 00:26:27.383 "num_base_bdevs_discovered": 3, 00:26:27.383 "num_base_bdevs_operational": 3, 00:26:27.383 "base_bdevs_list": [ 00:26:27.383 { 00:26:27.383 "name": null, 00:26:27.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.383 "is_configured": false, 00:26:27.383 "data_offset": 2048, 00:26:27.383 "data_size": 63488 00:26:27.383 }, 00:26:27.383 { 00:26:27.383 "name": "pt2", 00:26:27.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:27.383 "is_configured": true, 00:26:27.383 "data_offset": 2048, 00:26:27.383 "data_size": 63488 00:26:27.383 }, 00:26:27.383 { 00:26:27.383 "name": "pt3", 00:26:27.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:27.383 "is_configured": true, 00:26:27.383 "data_offset": 2048, 00:26:27.383 "data_size": 63488 00:26:27.383 }, 00:26:27.383 { 00:26:27.383 "name": "pt4", 00:26:27.383 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:27.383 "is_configured": true, 00:26:27.383 "data_offset": 2048, 00:26:27.383 "data_size": 63488 00:26:27.383 } 00:26:27.383 ] 00:26:27.383 }' 00:26:27.383 17:14:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:27.383 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.650 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:27.651 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.651 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.651 [2024-11-08 17:14:04.352294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:27.651 [2024-11-08 17:14:04.352424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:27.651 [2024-11-08 17:14:04.352524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:27.651 [2024-11-08 17:14:04.352608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:27.651 [2024-11-08 17:14:04.352622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:27.651 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.651 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:26:27.651 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.651 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.651 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.909 [2024-11-08 17:14:04.404276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:27.909 [2024-11-08 17:14:04.404336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.909 [2024-11-08 17:14:04.404361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:27.909 [2024-11-08 17:14:04.404373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.909 [2024-11-08 17:14:04.406744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.909 [2024-11-08 17:14:04.406786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:27.909 [2024-11-08 17:14:04.406873] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:27.909 [2024-11-08 17:14:04.406926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:27.909 
[2024-11-08 17:14:04.407051] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:27.909 [2024-11-08 17:14:04.407064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:27.909 [2024-11-08 17:14:04.407079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:26:27.909 [2024-11-08 17:14:04.407132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:27.909 [2024-11-08 17:14:04.407253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:27.909 pt1 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:27.909 "name": "raid_bdev1", 00:26:27.909 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:27.909 "strip_size_kb": 64, 00:26:27.909 "state": "configuring", 00:26:27.909 "raid_level": "raid5f", 00:26:27.909 "superblock": true, 00:26:27.909 "num_base_bdevs": 4, 00:26:27.909 "num_base_bdevs_discovered": 2, 00:26:27.909 "num_base_bdevs_operational": 3, 00:26:27.909 "base_bdevs_list": [ 00:26:27.909 { 00:26:27.909 "name": null, 00:26:27.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.909 "is_configured": false, 00:26:27.909 "data_offset": 2048, 00:26:27.909 "data_size": 63488 00:26:27.909 }, 00:26:27.909 { 00:26:27.909 "name": "pt2", 00:26:27.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:27.909 "is_configured": true, 00:26:27.909 "data_offset": 2048, 00:26:27.909 "data_size": 63488 00:26:27.909 }, 00:26:27.909 { 00:26:27.909 "name": "pt3", 00:26:27.909 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:27.909 "is_configured": true, 00:26:27.909 "data_offset": 2048, 00:26:27.909 "data_size": 63488 00:26:27.909 }, 00:26:27.909 { 00:26:27.909 "name": null, 00:26:27.909 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:27.909 "is_configured": false, 00:26:27.909 "data_offset": 2048, 00:26:27.909 "data_size": 63488 00:26:27.909 } 00:26:27.909 ] 
00:26:27.909 }' 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:27.909 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.168 [2024-11-08 17:14:04.748399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:28.168 [2024-11-08 17:14:04.748471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:28.168 [2024-11-08 17:14:04.748501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:28.168 [2024-11-08 17:14:04.748512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:28.168 [2024-11-08 17:14:04.749006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:28.168 [2024-11-08 17:14:04.749481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:26:28.168 [2024-11-08 17:14:04.749611] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:28.168 [2024-11-08 17:14:04.749645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:28.168 [2024-11-08 17:14:04.749825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:26:28.168 [2024-11-08 17:14:04.749837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:28.168 [2024-11-08 17:14:04.750106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:28.168 [2024-11-08 17:14:04.754941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:26:28.168 [2024-11-08 17:14:04.754962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:26:28.168 [2024-11-08 17:14:04.755227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:28.168 pt4 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:28.168 17:14:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.168 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:28.168 "name": "raid_bdev1", 00:26:28.168 "uuid": "108e1580-114f-4362-bc30-28c89f6b9f7b", 00:26:28.168 "strip_size_kb": 64, 00:26:28.168 "state": "online", 00:26:28.168 "raid_level": "raid5f", 00:26:28.168 "superblock": true, 00:26:28.168 "num_base_bdevs": 4, 00:26:28.168 "num_base_bdevs_discovered": 3, 00:26:28.168 "num_base_bdevs_operational": 3, 00:26:28.168 "base_bdevs_list": [ 00:26:28.168 { 00:26:28.169 "name": null, 00:26:28.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.169 "is_configured": false, 00:26:28.169 "data_offset": 2048, 00:26:28.169 "data_size": 63488 00:26:28.169 }, 00:26:28.169 { 00:26:28.169 "name": "pt2", 00:26:28.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:28.169 "is_configured": true, 00:26:28.169 "data_offset": 2048, 00:26:28.169 "data_size": 63488 00:26:28.169 }, 00:26:28.169 { 00:26:28.169 "name": "pt3", 00:26:28.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:28.169 "is_configured": true, 00:26:28.169 "data_offset": 2048, 00:26:28.169 "data_size": 63488 
00:26:28.169 }, 00:26:28.169 { 00:26:28.169 "name": "pt4", 00:26:28.169 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:28.169 "is_configured": true, 00:26:28.169 "data_offset": 2048, 00:26:28.169 "data_size": 63488 00:26:28.169 } 00:26:28.169 ] 00:26:28.169 }' 00:26:28.169 17:14:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:28.169 17:14:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.426 [2024-11-08 17:14:05.097436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 108e1580-114f-4362-bc30-28c89f6b9f7b '!=' 108e1580-114f-4362-bc30-28c89f6b9f7b ']' 00:26:28.426 17:14:05 
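The raid_bdev_info JSON dumped above reports each base bdev with `"data_offset": 2048` and `"data_size": 63488`. A small sketch of the arithmetic, assuming (as the harness elsewhere suggests with `bdev_malloc_create 32 512`) a 32 MiB malloc base bdev of 65536 512-byte blocks, where `superblock: true` reserves the leading blocks for the on-disk superblock:

```python
# Assumed base bdev geometry: 32 MiB at 512 B blocks (not shown for this test,
# inferred from the harness's usual `bdev_malloc_create 32 512`).
total_blocks = 32 * 1024 * 1024 // 512   # 65536 blocks
data_offset = 2048                       # "data_offset" from the JSON above
data_size = total_blocks - data_offset   # blocks left for data after the superblock
print(data_size)                         # 63488, matching "data_size": 63488
```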
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82406 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 82406 ']' 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 82406 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:28.426 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82406 00:26:28.684 killing process with pid 82406 00:26:28.685 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:28.685 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:28.685 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82406' 00:26:28.685 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 82406 00:26:28.685 [2024-11-08 17:14:05.149015] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:28.685 17:14:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 82406 00:26:28.685 [2024-11-08 17:14:05.149145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:28.685 [2024-11-08 17:14:05.149283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:28.685 [2024-11-08 17:14:05.149300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:26:28.942 [2024-11-08 17:14:05.455572] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:29.875 17:14:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:29.875 
00:26:29.875 real 0m6.567s 00:26:29.875 user 0m10.179s 00:26:29.875 sys 0m1.093s 00:26:29.875 ************************************ 00:26:29.875 END TEST raid5f_superblock_test 00:26:29.875 ************************************ 00:26:29.875 17:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:29.875 17:14:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.875 17:14:06 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:26:29.875 17:14:06 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:26:29.875 17:14:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:26:29.875 17:14:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:29.875 17:14:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:29.875 ************************************ 00:26:29.875 START TEST raid5f_rebuild_test 00:26:29.875 ************************************ 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
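Several checks in the superblock test above pipe `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` to isolate one bdev's record. A minimal Python equivalent of that filter, using sample data shaped like the JSON dumped earlier (values illustrative, not a real RPC call):

```python
import json

# Sample payload shaped like `bdev_raid_get_bdevs all` output (illustrative).
bdevs_json = json.dumps([
    {"name": "raid_bdev1", "state": "online", "raid_level": "raid5f",
     "num_base_bdevs": 4, "num_base_bdevs_discovered": 3},
    {"name": "other_raid", "state": "offline"},
])

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
selected = [b for b in json.loads(bdevs_json) if b["name"] == "raid_bdev1"]
info = selected[0]
print(info["state"])                      # verify_raid_bdev_state checks this
print(info["num_base_bdevs_discovered"])  # 3 after one base bdev is removed
```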
BaseBdev1 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:29.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82870 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82870 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 82870 ']' 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:29.875 17:14:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.875 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:29.875 Zero copy mechanism will not be used. 00:26:29.875 [2024-11-08 17:14:06.451733] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:26:29.875 [2024-11-08 17:14:06.451879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82870 ] 00:26:30.133 [2024-11-08 17:14:06.620293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.133 [2024-11-08 17:14:06.738318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.390 [2024-11-08 17:14:06.885764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:30.390 [2024-11-08 17:14:06.886000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:30.649 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:30.649 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:26:30.649 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:30.649 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:30.649 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.649 17:14:07 
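The bdevperf invocation above passes `-o 3M`, and the log immediately warns "I/O size of 3145728 is greater than zero copy threshold (65536). Zero copy mechanism will not be used." A quick check of those numbers:

```python
# -o 3M from the bdevperf command line above.
io_size = 3 * 1024 * 1024       # 3 MiB per I/O
zero_copy_threshold = 65536     # 64 KiB, from the log message
print(io_size)                  # 3145728, matching the warning
print(io_size > zero_copy_threshold)  # True -> zero copy is disabled for the run
```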
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.907 BaseBdev1_malloc 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.907 [2024-11-08 17:14:07.388673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:30.907 [2024-11-08 17:14:07.388862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.907 [2024-11-08 17:14:07.388891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:30.907 [2024-11-08 17:14:07.388904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.907 [2024-11-08 17:14:07.391196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.907 [2024-11-08 17:14:07.391231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:30.907 BaseBdev1 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.907 BaseBdev2_malloc 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.907 [2024-11-08 17:14:07.426348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:30.907 [2024-11-08 17:14:07.426398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.907 [2024-11-08 17:14:07.426416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:30.907 [2024-11-08 17:14:07.426426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.907 [2024-11-08 17:14:07.428627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.907 [2024-11-08 17:14:07.428768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:30.907 BaseBdev2 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.907 BaseBdev3_malloc 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.907 [2024-11-08 17:14:07.481564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:30.907 [2024-11-08 17:14:07.481734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.907 [2024-11-08 17:14:07.481780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:30.907 [2024-11-08 17:14:07.481795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.907 [2024-11-08 17:14:07.484087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.907 [2024-11-08 17:14:07.484124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:30.907 BaseBdev3 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.907 BaseBdev4_malloc 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.907 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.907 [2024-11-08 17:14:07.524064] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:30.907 [2024-11-08 17:14:07.524212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.907 [2024-11-08 17:14:07.524238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:30.907 [2024-11-08 17:14:07.524250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.907 [2024-11-08 17:14:07.526486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.908 [2024-11-08 17:14:07.526520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:30.908 BaseBdev4 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.908 spare_malloc 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.908 spare_delay 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.908 [2024-11-08 17:14:07.577938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:30.908 [2024-11-08 17:14:07.577989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.908 [2024-11-08 17:14:07.578008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:30.908 [2024-11-08 17:14:07.578019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.908 [2024-11-08 17:14:07.580230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.908 [2024-11-08 17:14:07.580264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:30.908 spare 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.908 [2024-11-08 17:14:07.585999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:30.908 [2024-11-08 17:14:07.587983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:30.908 [2024-11-08 17:14:07.588045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:30.908 [2024-11-08 17:14:07.588098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:30.908 [2024-11-08 17:14:07.588184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:30.908 
[2024-11-08 17:14:07.588196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:30.908 [2024-11-08 17:14:07.588462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:30.908 [2024-11-08 17:14:07.593495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:30.908 [2024-11-08 17:14:07.593513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:30.908 [2024-11-08 17:14:07.593716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.908 17:14:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.908 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.166 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:31.166 "name": "raid_bdev1", 00:26:31.166 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:31.166 "strip_size_kb": 64, 00:26:31.166 "state": "online", 00:26:31.166 "raid_level": "raid5f", 00:26:31.166 "superblock": false, 00:26:31.166 "num_base_bdevs": 4, 00:26:31.166 "num_base_bdevs_discovered": 4, 00:26:31.166 "num_base_bdevs_operational": 4, 00:26:31.166 "base_bdevs_list": [ 00:26:31.166 { 00:26:31.166 "name": "BaseBdev1", 00:26:31.166 "uuid": "9d7a4842-298c-5c82-98a3-2ac2b954eb28", 00:26:31.166 "is_configured": true, 00:26:31.166 "data_offset": 0, 00:26:31.166 "data_size": 65536 00:26:31.166 }, 00:26:31.166 { 00:26:31.166 "name": "BaseBdev2", 00:26:31.166 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:31.166 "is_configured": true, 00:26:31.166 "data_offset": 0, 00:26:31.166 "data_size": 65536 00:26:31.166 }, 00:26:31.166 { 00:26:31.166 "name": "BaseBdev3", 00:26:31.166 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:31.166 "is_configured": true, 00:26:31.166 "data_offset": 0, 00:26:31.166 "data_size": 65536 00:26:31.166 }, 00:26:31.166 { 00:26:31.166 "name": "BaseBdev4", 00:26:31.166 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:31.166 "is_configured": true, 00:26:31.166 "data_offset": 0, 00:26:31.166 "data_size": 65536 00:26:31.166 } 00:26:31.166 ] 00:26:31.166 }' 00:26:31.167 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:31.167 17:14:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:31.425 [2024-11-08 17:14:07.927707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
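The rebuild test just read `raid_bdev_size=196608` from `.[].num_blocks`, and the earlier configure log showed `blockcnt 196608, blocklen 512` with each of the 4 base bdevs reporting `"data_size": 65536`. A sketch of the raid5f capacity arithmetic behind those numbers — one parity strip per stripe leaves the equivalent of 3 data bdevs:

```python
# Values taken from the JSON and debug lines above.
num_base_bdevs = 4
base_data_blocks = 65536                 # "data_size" per base bdev
raid_blocks = base_data_blocks * (num_base_bdevs - 1)  # raid5f: n-1 data strips
print(raid_blocks)                       # 196608 -> matches raid_bdev_size
```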
00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:31.425 17:14:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:31.683 [2024-11-08 17:14:08.179571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:31.683 /dev/nbd0 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:31.683 17:14:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:31.683 1+0 records in 00:26:31.683 1+0 records out 00:26:31.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626355 s, 6.5 MB/s 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:26:31.683 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:26:32.249 512+0 records in 00:26:32.249 512+0 records out 00:26:32.249 100663296 bytes (101 MB, 96 MiB) copied, 0.70105 s, 144 MB/s 00:26:32.249 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:32.249 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local 
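The dd transfer above (`bs=196608 count=512`, reporting "100663296 bytes (101 MB, 96 MiB)") writes full raid5f stripes: the harness sets `write_unit_size=384` blocks and echoes a data offset multiple of 192. The byte counts line up as follows:

```python
# Values from the harness lines above.
block_size = 512
write_unit_blocks = 384          # write_unit_size for raid5f with 4 base bdevs
bs = write_unit_blocks * block_size
print(bs)                        # 196608 -> the dd bs= value
print(bs * 512)                  # 100663296 bytes, matching the dd summary
```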
rpc_server=/var/tmp/spdk.sock 00:26:32.249 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:32.249 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:32.249 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:32.249 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:32.249 17:14:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:32.508 [2024-11-08 17:14:09.184347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.508 [2024-11-08 17:14:09.202167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.508 17:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.771 17:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.771 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:32.771 "name": "raid_bdev1", 00:26:32.771 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:32.771 "strip_size_kb": 64, 00:26:32.771 "state": "online", 00:26:32.771 "raid_level": "raid5f", 00:26:32.771 
"superblock": false, 00:26:32.771 "num_base_bdevs": 4, 00:26:32.771 "num_base_bdevs_discovered": 3, 00:26:32.771 "num_base_bdevs_operational": 3, 00:26:32.771 "base_bdevs_list": [ 00:26:32.771 { 00:26:32.771 "name": null, 00:26:32.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.771 "is_configured": false, 00:26:32.771 "data_offset": 0, 00:26:32.771 "data_size": 65536 00:26:32.771 }, 00:26:32.771 { 00:26:32.771 "name": "BaseBdev2", 00:26:32.771 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:32.771 "is_configured": true, 00:26:32.771 "data_offset": 0, 00:26:32.771 "data_size": 65536 00:26:32.771 }, 00:26:32.771 { 00:26:32.771 "name": "BaseBdev3", 00:26:32.771 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:32.771 "is_configured": true, 00:26:32.771 "data_offset": 0, 00:26:32.771 "data_size": 65536 00:26:32.771 }, 00:26:32.771 { 00:26:32.771 "name": "BaseBdev4", 00:26:32.771 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:32.771 "is_configured": true, 00:26:32.771 "data_offset": 0, 00:26:32.771 "data_size": 65536 00:26:32.771 } 00:26:32.771 ] 00:26:32.771 }' 00:26:32.771 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:32.771 17:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.027 17:14:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:33.027 17:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.027 17:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.027 [2024-11-08 17:14:09.530267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:33.027 [2024-11-08 17:14:09.541356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:26:33.027 17:14:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.027 17:14:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:33.027 [2024-11-08 17:14:09.548354] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:34.006 "name": "raid_bdev1", 00:26:34.006 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:34.006 "strip_size_kb": 64, 00:26:34.006 "state": "online", 00:26:34.006 "raid_level": "raid5f", 00:26:34.006 "superblock": false, 00:26:34.006 "num_base_bdevs": 4, 00:26:34.006 "num_base_bdevs_discovered": 4, 00:26:34.006 "num_base_bdevs_operational": 4, 00:26:34.006 "process": { 00:26:34.006 "type": "rebuild", 00:26:34.006 "target": "spare", 00:26:34.006 "progress": { 00:26:34.006 "blocks": 17280, 00:26:34.006 "percent": 8 00:26:34.006 } 00:26:34.006 }, 00:26:34.006 
"base_bdevs_list": [ 00:26:34.006 { 00:26:34.006 "name": "spare", 00:26:34.006 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:34.006 "is_configured": true, 00:26:34.006 "data_offset": 0, 00:26:34.006 "data_size": 65536 00:26:34.006 }, 00:26:34.006 { 00:26:34.006 "name": "BaseBdev2", 00:26:34.006 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:34.006 "is_configured": true, 00:26:34.006 "data_offset": 0, 00:26:34.006 "data_size": 65536 00:26:34.006 }, 00:26:34.006 { 00:26:34.006 "name": "BaseBdev3", 00:26:34.006 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:34.006 "is_configured": true, 00:26:34.006 "data_offset": 0, 00:26:34.006 "data_size": 65536 00:26:34.006 }, 00:26:34.006 { 00:26:34.006 "name": "BaseBdev4", 00:26:34.006 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:34.006 "is_configured": true, 00:26:34.006 "data_offset": 0, 00:26:34.006 "data_size": 65536 00:26:34.006 } 00:26:34.006 ] 00:26:34.006 }' 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.006 [2024-11-08 17:14:10.657306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:34.006 [2024-11-08 17:14:10.658447] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:34.006 
[2024-11-08 17:14:10.658594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.006 [2024-11-08 17:14:10.658656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:34.006 [2024-11-08 17:14:10.658685] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.006 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:34.006 "name": "raid_bdev1", 00:26:34.006 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:34.006 "strip_size_kb": 64, 00:26:34.006 "state": "online", 00:26:34.006 "raid_level": "raid5f", 00:26:34.007 "superblock": false, 00:26:34.007 "num_base_bdevs": 4, 00:26:34.007 "num_base_bdevs_discovered": 3, 00:26:34.007 "num_base_bdevs_operational": 3, 00:26:34.007 "base_bdevs_list": [ 00:26:34.007 { 00:26:34.007 "name": null, 00:26:34.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.007 "is_configured": false, 00:26:34.007 "data_offset": 0, 00:26:34.007 "data_size": 65536 00:26:34.007 }, 00:26:34.007 { 00:26:34.007 "name": "BaseBdev2", 00:26:34.007 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:34.007 "is_configured": true, 00:26:34.007 "data_offset": 0, 00:26:34.007 "data_size": 65536 00:26:34.007 }, 00:26:34.007 { 00:26:34.007 "name": "BaseBdev3", 00:26:34.007 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:34.007 "is_configured": true, 00:26:34.007 "data_offset": 0, 00:26:34.007 "data_size": 65536 00:26:34.007 }, 00:26:34.007 { 00:26:34.007 "name": "BaseBdev4", 00:26:34.007 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:34.007 "is_configured": true, 00:26:34.007 "data_offset": 0, 00:26:34.007 "data_size": 65536 00:26:34.007 } 00:26:34.007 ] 00:26:34.007 }' 00:26:34.007 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:34.007 17:14:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.573 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:34.573 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:34.573 17:14:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:34.573 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:34.573 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:34.573 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.573 17:14:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.573 17:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.573 17:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.573 17:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.573 17:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:34.573 "name": "raid_bdev1", 00:26:34.573 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:34.573 "strip_size_kb": 64, 00:26:34.573 "state": "online", 00:26:34.573 "raid_level": "raid5f", 00:26:34.573 "superblock": false, 00:26:34.573 "num_base_bdevs": 4, 00:26:34.573 "num_base_bdevs_discovered": 3, 00:26:34.573 "num_base_bdevs_operational": 3, 00:26:34.573 "base_bdevs_list": [ 00:26:34.573 { 00:26:34.573 "name": null, 00:26:34.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.573 "is_configured": false, 00:26:34.573 "data_offset": 0, 00:26:34.573 "data_size": 65536 00:26:34.573 }, 00:26:34.573 { 00:26:34.573 "name": "BaseBdev2", 00:26:34.573 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:34.573 "is_configured": true, 00:26:34.573 "data_offset": 0, 00:26:34.573 "data_size": 65536 00:26:34.573 }, 00:26:34.573 { 00:26:34.573 "name": "BaseBdev3", 00:26:34.573 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:34.573 "is_configured": true, 00:26:34.573 "data_offset": 0, 00:26:34.573 "data_size": 65536 00:26:34.573 }, 
00:26:34.573 { 00:26:34.573 "name": "BaseBdev4", 00:26:34.573 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:34.573 "is_configured": true, 00:26:34.573 "data_offset": 0, 00:26:34.573 "data_size": 65536 00:26:34.573 } 00:26:34.573 ] 00:26:34.573 }' 00:26:34.573 17:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:34.574 17:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:34.574 17:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:34.574 17:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:34.574 17:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:34.574 17:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.574 17:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.574 [2024-11-08 17:14:11.095118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:34.574 [2024-11-08 17:14:11.105182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:26:34.574 17:14:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.574 17:14:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:34.574 [2024-11-08 17:14:11.111924] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:35.505 17:14:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:35.505 "name": "raid_bdev1", 00:26:35.505 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:35.505 "strip_size_kb": 64, 00:26:35.505 "state": "online", 00:26:35.505 "raid_level": "raid5f", 00:26:35.505 "superblock": false, 00:26:35.505 "num_base_bdevs": 4, 00:26:35.505 "num_base_bdevs_discovered": 4, 00:26:35.505 "num_base_bdevs_operational": 4, 00:26:35.505 "process": { 00:26:35.505 "type": "rebuild", 00:26:35.505 "target": "spare", 00:26:35.505 "progress": { 00:26:35.505 "blocks": 17280, 00:26:35.505 "percent": 8 00:26:35.505 } 00:26:35.505 }, 00:26:35.505 "base_bdevs_list": [ 00:26:35.505 { 00:26:35.505 "name": "spare", 00:26:35.505 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:35.505 "is_configured": true, 00:26:35.505 "data_offset": 0, 00:26:35.505 "data_size": 65536 00:26:35.505 }, 00:26:35.505 { 00:26:35.505 "name": "BaseBdev2", 00:26:35.505 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:35.505 "is_configured": true, 00:26:35.505 "data_offset": 0, 00:26:35.505 "data_size": 65536 00:26:35.505 }, 00:26:35.505 { 00:26:35.505 "name": "BaseBdev3", 00:26:35.505 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:35.505 
"is_configured": true, 00:26:35.505 "data_offset": 0, 00:26:35.505 "data_size": 65536 00:26:35.505 }, 00:26:35.505 { 00:26:35.505 "name": "BaseBdev4", 00:26:35.505 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:35.505 "is_configured": true, 00:26:35.505 "data_offset": 0, 00:26:35.505 "data_size": 65536 00:26:35.505 } 00:26:35.505 ] 00:26:35.505 }' 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=540 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.505 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.506 17:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.763 17:14:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.763 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:35.763 "name": "raid_bdev1", 00:26:35.763 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:35.763 "strip_size_kb": 64, 00:26:35.763 "state": "online", 00:26:35.763 "raid_level": "raid5f", 00:26:35.763 "superblock": false, 00:26:35.763 "num_base_bdevs": 4, 00:26:35.763 "num_base_bdevs_discovered": 4, 00:26:35.763 "num_base_bdevs_operational": 4, 00:26:35.763 "process": { 00:26:35.763 "type": "rebuild", 00:26:35.763 "target": "spare", 00:26:35.763 "progress": { 00:26:35.763 "blocks": 19200, 00:26:35.763 "percent": 9 00:26:35.763 } 00:26:35.763 }, 00:26:35.763 "base_bdevs_list": [ 00:26:35.763 { 00:26:35.763 "name": "spare", 00:26:35.763 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:35.763 "is_configured": true, 00:26:35.763 "data_offset": 0, 00:26:35.763 "data_size": 65536 00:26:35.763 }, 00:26:35.763 { 00:26:35.763 "name": "BaseBdev2", 00:26:35.763 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:35.763 "is_configured": true, 00:26:35.763 "data_offset": 0, 00:26:35.763 "data_size": 65536 00:26:35.763 }, 00:26:35.763 { 00:26:35.763 "name": "BaseBdev3", 00:26:35.763 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:35.763 "is_configured": true, 00:26:35.763 "data_offset": 0, 00:26:35.763 "data_size": 65536 00:26:35.763 }, 00:26:35.763 { 00:26:35.763 "name": "BaseBdev4", 00:26:35.763 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:35.763 "is_configured": true, 00:26:35.763 "data_offset": 0, 
00:26:35.763 "data_size": 65536 00:26:35.763 } 00:26:35.763 ] 00:26:35.763 }' 00:26:35.763 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:35.763 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:35.764 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:35.764 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:35.764 17:14:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:36.697 "name": "raid_bdev1", 00:26:36.697 "uuid": 
"e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:36.697 "strip_size_kb": 64, 00:26:36.697 "state": "online", 00:26:36.697 "raid_level": "raid5f", 00:26:36.697 "superblock": false, 00:26:36.697 "num_base_bdevs": 4, 00:26:36.697 "num_base_bdevs_discovered": 4, 00:26:36.697 "num_base_bdevs_operational": 4, 00:26:36.697 "process": { 00:26:36.697 "type": "rebuild", 00:26:36.697 "target": "spare", 00:26:36.697 "progress": { 00:26:36.697 "blocks": 40320, 00:26:36.697 "percent": 20 00:26:36.697 } 00:26:36.697 }, 00:26:36.697 "base_bdevs_list": [ 00:26:36.697 { 00:26:36.697 "name": "spare", 00:26:36.697 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:36.697 "is_configured": true, 00:26:36.697 "data_offset": 0, 00:26:36.697 "data_size": 65536 00:26:36.697 }, 00:26:36.697 { 00:26:36.697 "name": "BaseBdev2", 00:26:36.697 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:36.697 "is_configured": true, 00:26:36.697 "data_offset": 0, 00:26:36.697 "data_size": 65536 00:26:36.697 }, 00:26:36.697 { 00:26:36.697 "name": "BaseBdev3", 00:26:36.697 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:36.697 "is_configured": true, 00:26:36.697 "data_offset": 0, 00:26:36.697 "data_size": 65536 00:26:36.697 }, 00:26:36.697 { 00:26:36.697 "name": "BaseBdev4", 00:26:36.697 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:36.697 "is_configured": true, 00:26:36.697 "data_offset": 0, 00:26:36.697 "data_size": 65536 00:26:36.697 } 00:26:36.697 ] 00:26:36.697 }' 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:36.697 17:14:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:38.082 "name": "raid_bdev1", 00:26:38.082 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:38.082 "strip_size_kb": 64, 00:26:38.082 "state": "online", 00:26:38.082 "raid_level": "raid5f", 00:26:38.082 "superblock": false, 00:26:38.082 "num_base_bdevs": 4, 00:26:38.082 "num_base_bdevs_discovered": 4, 00:26:38.082 "num_base_bdevs_operational": 4, 00:26:38.082 "process": { 00:26:38.082 "type": "rebuild", 00:26:38.082 "target": "spare", 00:26:38.082 "progress": { 00:26:38.082 "blocks": 61440, 00:26:38.082 "percent": 31 00:26:38.082 } 00:26:38.082 }, 00:26:38.082 "base_bdevs_list": [ 00:26:38.082 { 00:26:38.082 "name": "spare", 00:26:38.082 "uuid": 
"2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:38.082 "is_configured": true, 00:26:38.082 "data_offset": 0, 00:26:38.082 "data_size": 65536 00:26:38.082 }, 00:26:38.082 { 00:26:38.082 "name": "BaseBdev2", 00:26:38.082 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:38.082 "is_configured": true, 00:26:38.082 "data_offset": 0, 00:26:38.082 "data_size": 65536 00:26:38.082 }, 00:26:38.082 { 00:26:38.082 "name": "BaseBdev3", 00:26:38.082 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:38.082 "is_configured": true, 00:26:38.082 "data_offset": 0, 00:26:38.082 "data_size": 65536 00:26:38.082 }, 00:26:38.082 { 00:26:38.082 "name": "BaseBdev4", 00:26:38.082 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:38.082 "is_configured": true, 00:26:38.082 "data_offset": 0, 00:26:38.082 "data_size": 65536 00:26:38.082 } 00:26:38.082 ] 00:26:38.082 }' 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:38.082 17:14:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:39.015 17:14:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:39.015 "name": "raid_bdev1", 00:26:39.015 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:39.015 "strip_size_kb": 64, 00:26:39.015 "state": "online", 00:26:39.015 "raid_level": "raid5f", 00:26:39.015 "superblock": false, 00:26:39.015 "num_base_bdevs": 4, 00:26:39.015 "num_base_bdevs_discovered": 4, 00:26:39.015 "num_base_bdevs_operational": 4, 00:26:39.015 "process": { 00:26:39.015 "type": "rebuild", 00:26:39.015 "target": "spare", 00:26:39.015 "progress": { 00:26:39.015 "blocks": 82560, 00:26:39.015 "percent": 41 00:26:39.015 } 00:26:39.015 }, 00:26:39.015 "base_bdevs_list": [ 00:26:39.015 { 00:26:39.015 "name": "spare", 00:26:39.015 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:39.015 "is_configured": true, 00:26:39.015 "data_offset": 0, 00:26:39.015 "data_size": 65536 00:26:39.015 }, 00:26:39.015 { 00:26:39.015 "name": "BaseBdev2", 00:26:39.015 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:39.015 "is_configured": true, 00:26:39.015 "data_offset": 0, 00:26:39.015 "data_size": 65536 00:26:39.015 }, 00:26:39.015 { 00:26:39.015 "name": "BaseBdev3", 00:26:39.015 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:39.015 "is_configured": true, 00:26:39.015 "data_offset": 0, 00:26:39.015 "data_size": 65536 00:26:39.015 }, 
00:26:39.015 { 00:26:39.015 "name": "BaseBdev4", 00:26:39.015 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:39.015 "is_configured": true, 00:26:39.015 "data_offset": 0, 00:26:39.015 "data_size": 65536 00:26:39.015 } 00:26:39.015 ] 00:26:39.015 }' 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:39.015 17:14:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:39.947 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:39.947 "name": "raid_bdev1", 00:26:39.947 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:39.947 "strip_size_kb": 64, 00:26:39.947 "state": "online", 00:26:39.947 "raid_level": "raid5f", 00:26:39.947 "superblock": false, 00:26:39.947 "num_base_bdevs": 4, 00:26:39.947 "num_base_bdevs_discovered": 4, 00:26:39.947 "num_base_bdevs_operational": 4, 00:26:39.948 "process": { 00:26:39.948 "type": "rebuild", 00:26:39.948 "target": "spare", 00:26:39.948 "progress": { 00:26:39.948 "blocks": 103680, 00:26:39.948 "percent": 52 00:26:39.948 } 00:26:39.948 }, 00:26:39.948 "base_bdevs_list": [ 00:26:39.948 { 00:26:39.948 "name": "spare", 00:26:39.948 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:39.948 "is_configured": true, 00:26:39.948 "data_offset": 0, 00:26:39.948 "data_size": 65536 00:26:39.948 }, 00:26:39.948 { 00:26:39.948 "name": "BaseBdev2", 00:26:39.948 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:39.948 "is_configured": true, 00:26:39.948 "data_offset": 0, 00:26:39.948 "data_size": 65536 00:26:39.948 }, 00:26:39.948 { 00:26:39.948 "name": "BaseBdev3", 00:26:39.948 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:39.948 "is_configured": true, 00:26:39.948 "data_offset": 0, 00:26:39.948 "data_size": 65536 00:26:39.948 }, 00:26:39.948 { 00:26:39.948 "name": "BaseBdev4", 00:26:39.948 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:39.948 "is_configured": true, 00:26:39.948 "data_offset": 0, 00:26:39.948 "data_size": 65536 00:26:39.948 } 00:26:39.948 ] 00:26:39.948 }' 00:26:39.948 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:40.205 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:40.205 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:40.205 17:14:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:40.205 17:14:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:41.138 "name": "raid_bdev1", 00:26:41.138 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:41.138 "strip_size_kb": 64, 00:26:41.138 "state": "online", 00:26:41.138 "raid_level": "raid5f", 00:26:41.138 "superblock": false, 00:26:41.138 "num_base_bdevs": 4, 00:26:41.138 "num_base_bdevs_discovered": 4, 00:26:41.138 "num_base_bdevs_operational": 4, 00:26:41.138 "process": { 00:26:41.138 "type": "rebuild", 00:26:41.138 "target": "spare", 00:26:41.138 "progress": { 00:26:41.138 "blocks": 124800, 
00:26:41.138 "percent": 63 00:26:41.138 } 00:26:41.138 }, 00:26:41.138 "base_bdevs_list": [ 00:26:41.138 { 00:26:41.138 "name": "spare", 00:26:41.138 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:41.138 "is_configured": true, 00:26:41.138 "data_offset": 0, 00:26:41.138 "data_size": 65536 00:26:41.138 }, 00:26:41.138 { 00:26:41.138 "name": "BaseBdev2", 00:26:41.138 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:41.138 "is_configured": true, 00:26:41.138 "data_offset": 0, 00:26:41.138 "data_size": 65536 00:26:41.138 }, 00:26:41.138 { 00:26:41.138 "name": "BaseBdev3", 00:26:41.138 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:41.138 "is_configured": true, 00:26:41.138 "data_offset": 0, 00:26:41.138 "data_size": 65536 00:26:41.138 }, 00:26:41.138 { 00:26:41.138 "name": "BaseBdev4", 00:26:41.138 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:41.138 "is_configured": true, 00:26:41.138 "data_offset": 0, 00:26:41.138 "data_size": 65536 00:26:41.138 } 00:26:41.138 ] 00:26:41.138 }' 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:41.138 17:14:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:42.541 "name": "raid_bdev1", 00:26:42.541 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:42.541 "strip_size_kb": 64, 00:26:42.541 "state": "online", 00:26:42.541 "raid_level": "raid5f", 00:26:42.541 "superblock": false, 00:26:42.541 "num_base_bdevs": 4, 00:26:42.541 "num_base_bdevs_discovered": 4, 00:26:42.541 "num_base_bdevs_operational": 4, 00:26:42.541 "process": { 00:26:42.541 "type": "rebuild", 00:26:42.541 "target": "spare", 00:26:42.541 "progress": { 00:26:42.541 "blocks": 145920, 00:26:42.541 "percent": 74 00:26:42.541 } 00:26:42.541 }, 00:26:42.541 "base_bdevs_list": [ 00:26:42.541 { 00:26:42.541 "name": "spare", 00:26:42.541 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:42.541 "is_configured": true, 00:26:42.541 "data_offset": 0, 00:26:42.541 "data_size": 65536 00:26:42.541 }, 00:26:42.541 { 00:26:42.541 "name": "BaseBdev2", 00:26:42.541 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:42.541 "is_configured": true, 00:26:42.541 "data_offset": 0, 00:26:42.541 "data_size": 65536 00:26:42.541 }, 00:26:42.541 { 00:26:42.541 "name": "BaseBdev3", 00:26:42.541 "uuid": 
"3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:42.541 "is_configured": true, 00:26:42.541 "data_offset": 0, 00:26:42.541 "data_size": 65536 00:26:42.541 }, 00:26:42.541 { 00:26:42.541 "name": "BaseBdev4", 00:26:42.541 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:42.541 "is_configured": true, 00:26:42.541 "data_offset": 0, 00:26:42.541 "data_size": 65536 00:26:42.541 } 00:26:42.541 ] 00:26:42.541 }' 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:42.541 17:14:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:43.474 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:43.474 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:43.474 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:43.474 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:43.474 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:43.475 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:43.475 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.475 17:14:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.475 17:14:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.475 17:14:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.475 17:14:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.475 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:43.475 "name": "raid_bdev1", 00:26:43.475 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:43.475 "strip_size_kb": 64, 00:26:43.475 "state": "online", 00:26:43.475 "raid_level": "raid5f", 00:26:43.475 "superblock": false, 00:26:43.475 "num_base_bdevs": 4, 00:26:43.475 "num_base_bdevs_discovered": 4, 00:26:43.475 "num_base_bdevs_operational": 4, 00:26:43.475 "process": { 00:26:43.475 "type": "rebuild", 00:26:43.475 "target": "spare", 00:26:43.475 "progress": { 00:26:43.475 "blocks": 167040, 00:26:43.475 "percent": 84 00:26:43.475 } 00:26:43.475 }, 00:26:43.475 "base_bdevs_list": [ 00:26:43.475 { 00:26:43.475 "name": "spare", 00:26:43.475 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:43.475 "is_configured": true, 00:26:43.475 "data_offset": 0, 00:26:43.475 "data_size": 65536 00:26:43.475 }, 00:26:43.475 { 00:26:43.475 "name": "BaseBdev2", 00:26:43.475 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:43.475 "is_configured": true, 00:26:43.475 "data_offset": 0, 00:26:43.475 "data_size": 65536 00:26:43.475 }, 00:26:43.475 { 00:26:43.475 "name": "BaseBdev3", 00:26:43.475 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:43.475 "is_configured": true, 00:26:43.475 "data_offset": 0, 00:26:43.475 "data_size": 65536 00:26:43.475 }, 00:26:43.475 { 00:26:43.475 "name": "BaseBdev4", 00:26:43.475 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:43.475 "is_configured": true, 00:26:43.475 "data_offset": 0, 00:26:43.475 "data_size": 65536 00:26:43.475 } 00:26:43.475 ] 00:26:43.475 }' 00:26:43.475 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:43.475 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:26:43.475 17:14:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:43.475 17:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:43.475 17:14:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.408 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:44.408 "name": "raid_bdev1", 00:26:44.408 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:44.408 "strip_size_kb": 64, 00:26:44.408 "state": "online", 00:26:44.408 "raid_level": "raid5f", 00:26:44.408 "superblock": false, 00:26:44.408 "num_base_bdevs": 4, 00:26:44.408 "num_base_bdevs_discovered": 4, 00:26:44.408 
"num_base_bdevs_operational": 4, 00:26:44.408 "process": { 00:26:44.408 "type": "rebuild", 00:26:44.408 "target": "spare", 00:26:44.408 "progress": { 00:26:44.408 "blocks": 188160, 00:26:44.408 "percent": 95 00:26:44.408 } 00:26:44.408 }, 00:26:44.408 "base_bdevs_list": [ 00:26:44.408 { 00:26:44.408 "name": "spare", 00:26:44.408 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:44.408 "is_configured": true, 00:26:44.408 "data_offset": 0, 00:26:44.408 "data_size": 65536 00:26:44.408 }, 00:26:44.408 { 00:26:44.408 "name": "BaseBdev2", 00:26:44.408 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:44.408 "is_configured": true, 00:26:44.408 "data_offset": 0, 00:26:44.409 "data_size": 65536 00:26:44.409 }, 00:26:44.409 { 00:26:44.409 "name": "BaseBdev3", 00:26:44.409 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:44.409 "is_configured": true, 00:26:44.409 "data_offset": 0, 00:26:44.409 "data_size": 65536 00:26:44.409 }, 00:26:44.409 { 00:26:44.409 "name": "BaseBdev4", 00:26:44.409 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:44.409 "is_configured": true, 00:26:44.409 "data_offset": 0, 00:26:44.409 "data_size": 65536 00:26:44.409 } 00:26:44.409 ] 00:26:44.409 }' 00:26:44.409 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:44.409 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:44.409 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:44.666 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:44.666 17:14:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:44.924 [2024-11-08 17:14:21.500291] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:44.924 [2024-11-08 17:14:21.500600] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid 
bdev raid_bdev1 00:26:44.924 [2024-11-08 17:14:21.500662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.489 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:45.489 "name": "raid_bdev1", 00:26:45.489 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:45.489 "strip_size_kb": 64, 00:26:45.489 "state": "online", 00:26:45.489 "raid_level": "raid5f", 00:26:45.489 "superblock": false, 00:26:45.489 "num_base_bdevs": 4, 00:26:45.489 "num_base_bdevs_discovered": 4, 00:26:45.489 "num_base_bdevs_operational": 4, 00:26:45.489 "base_bdevs_list": [ 00:26:45.489 { 00:26:45.489 "name": "spare", 00:26:45.489 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:45.489 "is_configured": true, 00:26:45.489 "data_offset": 
0, 00:26:45.489 "data_size": 65536 00:26:45.489 }, 00:26:45.489 { 00:26:45.489 "name": "BaseBdev2", 00:26:45.489 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:45.489 "is_configured": true, 00:26:45.489 "data_offset": 0, 00:26:45.489 "data_size": 65536 00:26:45.490 }, 00:26:45.490 { 00:26:45.490 "name": "BaseBdev3", 00:26:45.490 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:45.490 "is_configured": true, 00:26:45.490 "data_offset": 0, 00:26:45.490 "data_size": 65536 00:26:45.490 }, 00:26:45.490 { 00:26:45.490 "name": "BaseBdev4", 00:26:45.490 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:45.490 "is_configured": true, 00:26:45.490 "data_offset": 0, 00:26:45.490 "data_size": 65536 00:26:45.490 } 00:26:45.490 ] 00:26:45.490 }' 00:26:45.490 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:45.748 17:14:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:45.748 "name": "raid_bdev1", 00:26:45.748 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:45.748 "strip_size_kb": 64, 00:26:45.748 "state": "online", 00:26:45.748 "raid_level": "raid5f", 00:26:45.748 "superblock": false, 00:26:45.748 "num_base_bdevs": 4, 00:26:45.748 "num_base_bdevs_discovered": 4, 00:26:45.748 "num_base_bdevs_operational": 4, 00:26:45.748 "base_bdevs_list": [ 00:26:45.748 { 00:26:45.748 "name": "spare", 00:26:45.748 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:45.748 "is_configured": true, 00:26:45.748 "data_offset": 0, 00:26:45.748 "data_size": 65536 00:26:45.748 }, 00:26:45.748 { 00:26:45.748 "name": "BaseBdev2", 00:26:45.748 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:45.748 "is_configured": true, 00:26:45.748 "data_offset": 0, 00:26:45.748 "data_size": 65536 00:26:45.748 }, 00:26:45.748 { 00:26:45.748 "name": "BaseBdev3", 00:26:45.748 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:45.748 "is_configured": true, 00:26:45.748 "data_offset": 0, 00:26:45.748 "data_size": 65536 00:26:45.748 }, 00:26:45.748 { 00:26:45.748 "name": "BaseBdev4", 00:26:45.748 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:45.748 "is_configured": true, 00:26:45.748 "data_offset": 0, 00:26:45.748 "data_size": 65536 00:26:45.748 } 00:26:45.748 ] 00:26:45.748 }' 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.748 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:26:45.748 "name": "raid_bdev1", 00:26:45.748 "uuid": "e7e8682d-ccbb-4465-acd4-217d550f7790", 00:26:45.748 "strip_size_kb": 64, 00:26:45.748 "state": "online", 00:26:45.748 "raid_level": "raid5f", 00:26:45.749 "superblock": false, 00:26:45.749 "num_base_bdevs": 4, 00:26:45.749 "num_base_bdevs_discovered": 4, 00:26:45.749 "num_base_bdevs_operational": 4, 00:26:45.749 "base_bdevs_list": [ 00:26:45.749 { 00:26:45.749 "name": "spare", 00:26:45.749 "uuid": "2af92240-8b0d-503a-bfd3-ce2c0ec5d3f2", 00:26:45.749 "is_configured": true, 00:26:45.749 "data_offset": 0, 00:26:45.749 "data_size": 65536 00:26:45.749 }, 00:26:45.749 { 00:26:45.749 "name": "BaseBdev2", 00:26:45.749 "uuid": "bc8b2a74-9d66-58cf-a011-8e97d8ba6063", 00:26:45.749 "is_configured": true, 00:26:45.749 "data_offset": 0, 00:26:45.749 "data_size": 65536 00:26:45.749 }, 00:26:45.749 { 00:26:45.749 "name": "BaseBdev3", 00:26:45.749 "uuid": "3fc57907-d890-5f6d-9ad9-f2275f1334c5", 00:26:45.749 "is_configured": true, 00:26:45.749 "data_offset": 0, 00:26:45.749 "data_size": 65536 00:26:45.749 }, 00:26:45.749 { 00:26:45.749 "name": "BaseBdev4", 00:26:45.749 "uuid": "584a12bf-adf4-5335-bbef-8c898e256cf6", 00:26:45.749 "is_configured": true, 00:26:45.749 "data_offset": 0, 00:26:45.749 "data_size": 65536 00:26:45.749 } 00:26:45.749 ] 00:26:45.749 }' 00:26:45.749 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:45.749 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.006 [2024-11-08 17:14:22.657492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:46.006 [2024-11-08 
17:14:22.657526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:46.006 [2024-11-08 17:14:22.657625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:46.006 [2024-11-08 17:14:22.657737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:46.006 [2024-11-08 17:14:22.657749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:46.006 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:46.007 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:46.007 17:14:22 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:46.007 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:46.007 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:46.007 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:46.007 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:46.007 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:46.265 /dev/nbd0 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:46.523 17:14:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:46.523 1+0 records in 00:26:46.523 1+0 records out 00:26:46.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644727 s, 6.4 MB/s 
00:26:46.523 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:46.523 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:26:46.523 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:46.523 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:26:46.523 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:26:46.523 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:46.523 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:46.523 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:26:46.523 /dev/nbd1 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:46.782 1+0 records in 00:26:46.782 1+0 records out 00:26:46.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501229 s, 8.2 MB/s 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:46.782 17:14:23 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:47.040 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:47.040 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:47.040 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:47.040 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:47.040 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:47.040 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:47.040 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:47.040 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:47.040 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:47.040 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:47.312 17:14:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82870 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 82870 ']' 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 82870 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82870 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:47.312 killing process with pid 82870 00:26:47.312 Received shutdown signal, test time was about 60.000000 seconds 00:26:47.312 00:26:47.312 Latency(us) 00:26:47.312 [2024-11-08T17:14:24.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.312 [2024-11-08T17:14:24.027Z] =================================================================================================================== 00:26:47.312 [2024-11-08T17:14:24.027Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82870' 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 82870 00:26:47.312 17:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 82870 00:26:47.312 [2024-11-08 17:14:23.930731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:47.606 [2024-11-08 17:14:24.255361] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:26:48.542 00:26:48.542 real 0m18.640s 00:26:48.542 user 0m21.756s 00:26:48.542 sys 0m1.956s 00:26:48.542 ************************************ 00:26:48.542 END TEST raid5f_rebuild_test 00:26:48.542 ************************************ 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.542 17:14:25 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:26:48.542 17:14:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:26:48.542 17:14:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:48.542 17:14:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:48.542 ************************************ 00:26:48.542 START TEST raid5f_rebuild_test_sb 00:26:48.542 ************************************ 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:48.542 17:14:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:48.542 
17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:26:48.542 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:26:48.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=83381 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 83381 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83381 ']' 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.543 17:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:48.543 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:26:48.543 Zero copy mechanism will not be used. 00:26:48.543 [2024-11-08 17:14:25.159250] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:26:48.543 [2024-11-08 17:14:25.159390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83381 ] 00:26:48.801 [2024-11-08 17:14:25.317237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.801 [2024-11-08 17:14:25.435997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.093 [2024-11-08 17:14:25.584440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:49.093 [2024-11-08 17:14:25.584512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.355 BaseBdev1_malloc 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.355 
17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.355 [2024-11-08 17:14:26.050054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:49.355 [2024-11-08 17:14:26.050128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:49.355 [2024-11-08 17:14:26.050152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:49.355 [2024-11-08 17:14:26.050164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:49.355 [2024-11-08 17:14:26.052448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:49.355 [2024-11-08 17:14:26.052489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:49.355 BaseBdev1 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.355 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.616 BaseBdev2_malloc 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.616 [2024-11-08 17:14:26.088165] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:26:49.616 [2024-11-08 17:14:26.088234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:49.616 [2024-11-08 17:14:26.088256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:49.616 [2024-11-08 17:14:26.088271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:49.616 [2024-11-08 17:14:26.090570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:49.616 [2024-11-08 17:14:26.090701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:49.616 BaseBdev2 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.616 BaseBdev3_malloc 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.616 [2024-11-08 17:14:26.139604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:49.616 [2024-11-08 17:14:26.139662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:49.616 [2024-11-08 
17:14:26.139685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:49.616 [2024-11-08 17:14:26.139697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:49.616 [2024-11-08 17:14:26.141995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:49.616 [2024-11-08 17:14:26.142032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:49.616 BaseBdev3 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.616 BaseBdev4_malloc 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.616 [2024-11-08 17:14:26.177714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:49.616 [2024-11-08 17:14:26.177791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:49.616 [2024-11-08 17:14:26.177813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:49.616 [2024-11-08 17:14:26.177825] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:49.616 [2024-11-08 17:14:26.180136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:49.616 [2024-11-08 17:14:26.180175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:49.616 BaseBdev4 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.616 spare_malloc 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.616 spare_delay 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.616 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.616 [2024-11-08 17:14:26.231906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:49.616 [2024-11-08 17:14:26.231969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:26:49.616 [2024-11-08 17:14:26.231992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:49.617 [2024-11-08 17:14:26.232003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:49.617 [2024-11-08 17:14:26.234313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:49.617 [2024-11-08 17:14:26.234354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:49.617 spare 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.617 [2024-11-08 17:14:26.239981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:49.617 [2024-11-08 17:14:26.241997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:49.617 [2024-11-08 17:14:26.242065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:49.617 [2024-11-08 17:14:26.242120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:49.617 [2024-11-08 17:14:26.242320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:49.617 [2024-11-08 17:14:26.242336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:49.617 [2024-11-08 17:14:26.242632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:49.617 [2024-11-08 17:14:26.247679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007780 00:26:49.617 [2024-11-08 17:14:26.247699] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:49.617 [2024-11-08 17:14:26.247932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.617 
17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:49.617 "name": "raid_bdev1", 00:26:49.617 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:49.617 "strip_size_kb": 64, 00:26:49.617 "state": "online", 00:26:49.617 "raid_level": "raid5f", 00:26:49.617 "superblock": true, 00:26:49.617 "num_base_bdevs": 4, 00:26:49.617 "num_base_bdevs_discovered": 4, 00:26:49.617 "num_base_bdevs_operational": 4, 00:26:49.617 "base_bdevs_list": [ 00:26:49.617 { 00:26:49.617 "name": "BaseBdev1", 00:26:49.617 "uuid": "5b2cd563-3609-5630-a379-bf293ae39549", 00:26:49.617 "is_configured": true, 00:26:49.617 "data_offset": 2048, 00:26:49.617 "data_size": 63488 00:26:49.617 }, 00:26:49.617 { 00:26:49.617 "name": "BaseBdev2", 00:26:49.617 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:49.617 "is_configured": true, 00:26:49.617 "data_offset": 2048, 00:26:49.617 "data_size": 63488 00:26:49.617 }, 00:26:49.617 { 00:26:49.617 "name": "BaseBdev3", 00:26:49.617 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:49.617 "is_configured": true, 00:26:49.617 "data_offset": 2048, 00:26:49.617 "data_size": 63488 00:26:49.617 }, 00:26:49.617 { 00:26:49.617 "name": "BaseBdev4", 00:26:49.617 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:49.617 "is_configured": true, 00:26:49.617 "data_offset": 2048, 00:26:49.617 "data_size": 63488 00:26:49.617 } 00:26:49.617 ] 00:26:49.617 }' 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:49.617 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.945 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:49.945 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.945 17:14:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.945 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:49.945 [2024-11-08 17:14:26.582566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:49.945 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.945 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:26:49.945 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.945 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:49.945 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.945 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.945 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:50.203 17:14:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:50.203 [2024-11-08 17:14:26.826443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:50.203 /dev/nbd0 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:50.203 1+0 records in 00:26:50.203 1+0 records out 00:26:50.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298618 s, 13.7 MB/s 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:26:50.203 17:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:26:50.769 496+0 records in 00:26:50.769 496+0 records out 00:26:50.769 97517568 bytes (98 MB, 93 MiB) copied, 0.600208 s, 162 MB/s 00:26:50.769 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:50.769 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:50.769 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:50.769 17:14:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:51.025 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:51.026 [2024-11-08 17:14:27.675696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.026 [2024-11-08 17:14:27.702503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.026 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.283 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:51.283 "name": "raid_bdev1", 00:26:51.283 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:51.283 "strip_size_kb": 64, 00:26:51.283 "state": "online", 00:26:51.283 "raid_level": "raid5f", 00:26:51.283 "superblock": true, 00:26:51.283 "num_base_bdevs": 4, 
00:26:51.283 "num_base_bdevs_discovered": 3, 00:26:51.283 "num_base_bdevs_operational": 3, 00:26:51.283 "base_bdevs_list": [ 00:26:51.283 { 00:26:51.283 "name": null, 00:26:51.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.283 "is_configured": false, 00:26:51.283 "data_offset": 0, 00:26:51.283 "data_size": 63488 00:26:51.283 }, 00:26:51.283 { 00:26:51.283 "name": "BaseBdev2", 00:26:51.283 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:51.283 "is_configured": true, 00:26:51.283 "data_offset": 2048, 00:26:51.283 "data_size": 63488 00:26:51.283 }, 00:26:51.283 { 00:26:51.283 "name": "BaseBdev3", 00:26:51.283 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:51.283 "is_configured": true, 00:26:51.283 "data_offset": 2048, 00:26:51.283 "data_size": 63488 00:26:51.283 }, 00:26:51.283 { 00:26:51.283 "name": "BaseBdev4", 00:26:51.283 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:51.283 "is_configured": true, 00:26:51.283 "data_offset": 2048, 00:26:51.283 "data_size": 63488 00:26:51.283 } 00:26:51.283 ] 00:26:51.283 }' 00:26:51.284 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:51.284 17:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.541 17:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:51.541 17:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.541 17:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.541 [2024-11-08 17:14:28.006594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:51.541 [2024-11-08 17:14:28.017250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:26:51.541 17:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.541 17:14:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:51.541 [2024-11-08 17:14:28.024551] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:52.475 "name": "raid_bdev1", 00:26:52.475 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:52.475 "strip_size_kb": 64, 00:26:52.475 "state": "online", 00:26:52.475 "raid_level": "raid5f", 00:26:52.475 "superblock": true, 00:26:52.475 "num_base_bdevs": 4, 00:26:52.475 "num_base_bdevs_discovered": 4, 00:26:52.475 "num_base_bdevs_operational": 4, 00:26:52.475 "process": { 00:26:52.475 "type": "rebuild", 00:26:52.475 "target": "spare", 00:26:52.475 "progress": { 00:26:52.475 "blocks": 17280, 00:26:52.475 "percent": 9 00:26:52.475 } 
00:26:52.475 }, 00:26:52.475 "base_bdevs_list": [ 00:26:52.475 { 00:26:52.475 "name": "spare", 00:26:52.475 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:26:52.475 "is_configured": true, 00:26:52.475 "data_offset": 2048, 00:26:52.475 "data_size": 63488 00:26:52.475 }, 00:26:52.475 { 00:26:52.475 "name": "BaseBdev2", 00:26:52.475 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:52.475 "is_configured": true, 00:26:52.475 "data_offset": 2048, 00:26:52.475 "data_size": 63488 00:26:52.475 }, 00:26:52.475 { 00:26:52.475 "name": "BaseBdev3", 00:26:52.475 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:52.475 "is_configured": true, 00:26:52.475 "data_offset": 2048, 00:26:52.475 "data_size": 63488 00:26:52.475 }, 00:26:52.475 { 00:26:52.475 "name": "BaseBdev4", 00:26:52.475 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:52.475 "is_configured": true, 00:26:52.475 "data_offset": 2048, 00:26:52.475 "data_size": 63488 00:26:52.475 } 00:26:52.475 ] 00:26:52.475 }' 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.475 [2024-11-08 17:14:29.121545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:52.475 [2024-11-08 17:14:29.134378] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:26:52.475 [2024-11-08 17:14:29.134469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:52.475 [2024-11-08 17:14:29.134497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:52.475 [2024-11-08 17:14:29.134517] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.475 17:14:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.475 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.733 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:52.733 "name": "raid_bdev1", 00:26:52.733 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:52.733 "strip_size_kb": 64, 00:26:52.733 "state": "online", 00:26:52.733 "raid_level": "raid5f", 00:26:52.733 "superblock": true, 00:26:52.733 "num_base_bdevs": 4, 00:26:52.733 "num_base_bdevs_discovered": 3, 00:26:52.733 "num_base_bdevs_operational": 3, 00:26:52.733 "base_bdevs_list": [ 00:26:52.733 { 00:26:52.733 "name": null, 00:26:52.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.733 "is_configured": false, 00:26:52.733 "data_offset": 0, 00:26:52.733 "data_size": 63488 00:26:52.733 }, 00:26:52.733 { 00:26:52.733 "name": "BaseBdev2", 00:26:52.733 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:52.733 "is_configured": true, 00:26:52.733 "data_offset": 2048, 00:26:52.733 "data_size": 63488 00:26:52.733 }, 00:26:52.733 { 00:26:52.733 "name": "BaseBdev3", 00:26:52.733 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:52.733 "is_configured": true, 00:26:52.733 "data_offset": 2048, 00:26:52.733 "data_size": 63488 00:26:52.733 }, 00:26:52.733 { 00:26:52.733 "name": "BaseBdev4", 00:26:52.733 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:52.733 "is_configured": true, 00:26:52.733 "data_offset": 2048, 00:26:52.733 "data_size": 63488 00:26:52.733 } 00:26:52.733 ] 00:26:52.733 }' 00:26:52.733 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:52.733 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:52.990 17:14:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:52.990 "name": "raid_bdev1", 00:26:52.990 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:52.990 "strip_size_kb": 64, 00:26:52.990 "state": "online", 00:26:52.990 "raid_level": "raid5f", 00:26:52.990 "superblock": true, 00:26:52.990 "num_base_bdevs": 4, 00:26:52.990 "num_base_bdevs_discovered": 3, 00:26:52.990 "num_base_bdevs_operational": 3, 00:26:52.990 "base_bdevs_list": [ 00:26:52.990 { 00:26:52.990 "name": null, 00:26:52.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.990 "is_configured": false, 00:26:52.990 "data_offset": 0, 00:26:52.990 "data_size": 63488 00:26:52.990 }, 00:26:52.990 { 00:26:52.990 "name": "BaseBdev2", 00:26:52.990 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:52.990 "is_configured": true, 00:26:52.990 "data_offset": 2048, 00:26:52.990 "data_size": 63488 00:26:52.990 }, 00:26:52.990 { 00:26:52.990 "name": "BaseBdev3", 00:26:52.990 "uuid": 
"c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:52.990 "is_configured": true, 00:26:52.990 "data_offset": 2048, 00:26:52.990 "data_size": 63488 00:26:52.990 }, 00:26:52.990 { 00:26:52.990 "name": "BaseBdev4", 00:26:52.990 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:52.990 "is_configured": true, 00:26:52.990 "data_offset": 2048, 00:26:52.990 "data_size": 63488 00:26:52.990 } 00:26:52.990 ] 00:26:52.990 }' 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.990 [2024-11-08 17:14:29.567125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:52.990 [2024-11-08 17:14:29.577381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.990 17:14:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:52.990 [2024-11-08 17:14:29.584583] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:53.922 "name": "raid_bdev1", 00:26:53.922 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:53.922 "strip_size_kb": 64, 00:26:53.922 "state": "online", 00:26:53.922 "raid_level": "raid5f", 00:26:53.922 "superblock": true, 00:26:53.922 "num_base_bdevs": 4, 00:26:53.922 "num_base_bdevs_discovered": 4, 00:26:53.922 "num_base_bdevs_operational": 4, 00:26:53.922 "process": { 00:26:53.922 "type": "rebuild", 00:26:53.922 "target": "spare", 00:26:53.922 "progress": { 00:26:53.922 "blocks": 17280, 00:26:53.922 "percent": 9 00:26:53.922 } 00:26:53.922 }, 00:26:53.922 "base_bdevs_list": [ 00:26:53.922 { 00:26:53.922 "name": "spare", 00:26:53.922 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:26:53.922 "is_configured": true, 00:26:53.922 "data_offset": 2048, 00:26:53.922 "data_size": 63488 00:26:53.922 }, 00:26:53.922 { 00:26:53.922 "name": "BaseBdev2", 00:26:53.922 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:53.922 
"is_configured": true, 00:26:53.922 "data_offset": 2048, 00:26:53.922 "data_size": 63488 00:26:53.922 }, 00:26:53.922 { 00:26:53.922 "name": "BaseBdev3", 00:26:53.922 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:53.922 "is_configured": true, 00:26:53.922 "data_offset": 2048, 00:26:53.922 "data_size": 63488 00:26:53.922 }, 00:26:53.922 { 00:26:53.922 "name": "BaseBdev4", 00:26:53.922 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:53.922 "is_configured": true, 00:26:53.922 "data_offset": 2048, 00:26:53.922 "data_size": 63488 00:26:53.922 } 00:26:53.922 ] 00:26:53.922 }' 00:26:53.922 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:26:54.180 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=558 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:54.180 "name": "raid_bdev1", 00:26:54.180 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:54.180 "strip_size_kb": 64, 00:26:54.180 "state": "online", 00:26:54.180 "raid_level": "raid5f", 00:26:54.180 "superblock": true, 00:26:54.180 "num_base_bdevs": 4, 00:26:54.180 "num_base_bdevs_discovered": 4, 00:26:54.180 "num_base_bdevs_operational": 4, 00:26:54.180 "process": { 00:26:54.180 "type": "rebuild", 00:26:54.180 "target": "spare", 00:26:54.180 "progress": { 00:26:54.180 "blocks": 19200, 00:26:54.180 "percent": 10 00:26:54.180 } 00:26:54.180 }, 00:26:54.180 "base_bdevs_list": [ 00:26:54.180 { 00:26:54.180 "name": "spare", 00:26:54.180 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:26:54.180 "is_configured": true, 00:26:54.180 "data_offset": 2048, 00:26:54.180 "data_size": 63488 00:26:54.180 }, 00:26:54.180 { 00:26:54.180 "name": "BaseBdev2", 00:26:54.180 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:54.180 
"is_configured": true, 00:26:54.180 "data_offset": 2048, 00:26:54.180 "data_size": 63488 00:26:54.180 }, 00:26:54.180 { 00:26:54.180 "name": "BaseBdev3", 00:26:54.180 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:54.180 "is_configured": true, 00:26:54.180 "data_offset": 2048, 00:26:54.180 "data_size": 63488 00:26:54.180 }, 00:26:54.180 { 00:26:54.180 "name": "BaseBdev4", 00:26:54.180 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:54.180 "is_configured": true, 00:26:54.180 "data_offset": 2048, 00:26:54.180 "data_size": 63488 00:26:54.180 } 00:26:54.180 ] 00:26:54.180 }' 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:54.180 17:14:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.115 17:14:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:55.115 "name": "raid_bdev1", 00:26:55.115 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:55.115 "strip_size_kb": 64, 00:26:55.115 "state": "online", 00:26:55.115 "raid_level": "raid5f", 00:26:55.115 "superblock": true, 00:26:55.115 "num_base_bdevs": 4, 00:26:55.115 "num_base_bdevs_discovered": 4, 00:26:55.115 "num_base_bdevs_operational": 4, 00:26:55.115 "process": { 00:26:55.115 "type": "rebuild", 00:26:55.115 "target": "spare", 00:26:55.115 "progress": { 00:26:55.115 "blocks": 40320, 00:26:55.115 "percent": 21 00:26:55.115 } 00:26:55.115 }, 00:26:55.115 "base_bdevs_list": [ 00:26:55.115 { 00:26:55.115 "name": "spare", 00:26:55.115 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:26:55.115 "is_configured": true, 00:26:55.115 "data_offset": 2048, 00:26:55.115 "data_size": 63488 00:26:55.115 }, 00:26:55.115 { 00:26:55.115 "name": "BaseBdev2", 00:26:55.115 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:55.115 "is_configured": true, 00:26:55.115 "data_offset": 2048, 00:26:55.115 "data_size": 63488 00:26:55.115 }, 00:26:55.115 { 00:26:55.115 "name": "BaseBdev3", 00:26:55.115 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:55.115 "is_configured": true, 00:26:55.115 "data_offset": 2048, 00:26:55.115 "data_size": 63488 00:26:55.115 }, 00:26:55.115 { 00:26:55.115 "name": "BaseBdev4", 00:26:55.115 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:55.115 "is_configured": true, 00:26:55.115 "data_offset": 2048, 00:26:55.115 
"data_size": 63488 00:26:55.115 } 00:26:55.115 ] 00:26:55.115 }' 00:26:55.115 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:55.373 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:55.373 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:55.373 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:55.373 17:14:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:56.306 "name": 
"raid_bdev1", 00:26:56.306 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:56.306 "strip_size_kb": 64, 00:26:56.306 "state": "online", 00:26:56.306 "raid_level": "raid5f", 00:26:56.306 "superblock": true, 00:26:56.306 "num_base_bdevs": 4, 00:26:56.306 "num_base_bdevs_discovered": 4, 00:26:56.306 "num_base_bdevs_operational": 4, 00:26:56.306 "process": { 00:26:56.306 "type": "rebuild", 00:26:56.306 "target": "spare", 00:26:56.306 "progress": { 00:26:56.306 "blocks": 61440, 00:26:56.306 "percent": 32 00:26:56.306 } 00:26:56.306 }, 00:26:56.306 "base_bdevs_list": [ 00:26:56.306 { 00:26:56.306 "name": "spare", 00:26:56.306 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:26:56.306 "is_configured": true, 00:26:56.306 "data_offset": 2048, 00:26:56.306 "data_size": 63488 00:26:56.306 }, 00:26:56.306 { 00:26:56.306 "name": "BaseBdev2", 00:26:56.306 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:56.306 "is_configured": true, 00:26:56.306 "data_offset": 2048, 00:26:56.306 "data_size": 63488 00:26:56.306 }, 00:26:56.306 { 00:26:56.306 "name": "BaseBdev3", 00:26:56.306 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:56.306 "is_configured": true, 00:26:56.306 "data_offset": 2048, 00:26:56.306 "data_size": 63488 00:26:56.306 }, 00:26:56.306 { 00:26:56.306 "name": "BaseBdev4", 00:26:56.306 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:56.306 "is_configured": true, 00:26:56.306 "data_offset": 2048, 00:26:56.306 "data_size": 63488 00:26:56.306 } 00:26:56.306 ] 00:26:56.306 }' 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:56.306 17:14:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:56.306 17:14:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:57.719 17:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:57.719 17:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:57.719 17:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:57.719 17:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:57.719 17:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:57.719 17:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:57.719 17:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.719 17:14:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.719 17:14:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.719 17:14:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.719 17:14:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.719 17:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:57.719 "name": "raid_bdev1", 00:26:57.719 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:57.719 "strip_size_kb": 64, 00:26:57.719 "state": "online", 00:26:57.719 "raid_level": "raid5f", 00:26:57.719 "superblock": true, 00:26:57.719 "num_base_bdevs": 4, 00:26:57.719 "num_base_bdevs_discovered": 4, 00:26:57.719 "num_base_bdevs_operational": 4, 00:26:57.719 "process": { 00:26:57.719 "type": "rebuild", 00:26:57.719 "target": "spare", 00:26:57.719 "progress": { 00:26:57.719 "blocks": 82560, 00:26:57.719 "percent": 43 00:26:57.719 } 00:26:57.719 }, 00:26:57.719 
"base_bdevs_list": [ 00:26:57.719 { 00:26:57.719 "name": "spare", 00:26:57.719 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:26:57.719 "is_configured": true, 00:26:57.719 "data_offset": 2048, 00:26:57.719 "data_size": 63488 00:26:57.719 }, 00:26:57.719 { 00:26:57.719 "name": "BaseBdev2", 00:26:57.719 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:57.719 "is_configured": true, 00:26:57.719 "data_offset": 2048, 00:26:57.719 "data_size": 63488 00:26:57.719 }, 00:26:57.719 { 00:26:57.719 "name": "BaseBdev3", 00:26:57.719 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:57.719 "is_configured": true, 00:26:57.719 "data_offset": 2048, 00:26:57.719 "data_size": 63488 00:26:57.719 }, 00:26:57.719 { 00:26:57.719 "name": "BaseBdev4", 00:26:57.719 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:57.719 "is_configured": true, 00:26:57.720 "data_offset": 2048, 00:26:57.720 "data_size": 63488 00:26:57.720 } 00:26:57.720 ] 00:26:57.720 }' 00:26:57.720 17:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:57.720 17:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:57.720 17:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:57.720 17:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:57.720 17:14:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:58.687 "name": "raid_bdev1", 00:26:58.687 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:58.687 "strip_size_kb": 64, 00:26:58.687 "state": "online", 00:26:58.687 "raid_level": "raid5f", 00:26:58.687 "superblock": true, 00:26:58.687 "num_base_bdevs": 4, 00:26:58.687 "num_base_bdevs_discovered": 4, 00:26:58.687 "num_base_bdevs_operational": 4, 00:26:58.687 "process": { 00:26:58.687 "type": "rebuild", 00:26:58.687 "target": "spare", 00:26:58.687 "progress": { 00:26:58.687 "blocks": 103680, 00:26:58.687 "percent": 54 00:26:58.687 } 00:26:58.687 }, 00:26:58.687 "base_bdevs_list": [ 00:26:58.687 { 00:26:58.687 "name": "spare", 00:26:58.687 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:26:58.687 "is_configured": true, 00:26:58.687 "data_offset": 2048, 00:26:58.687 "data_size": 63488 00:26:58.687 }, 00:26:58.687 { 00:26:58.687 "name": "BaseBdev2", 00:26:58.687 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:58.687 "is_configured": true, 00:26:58.687 "data_offset": 2048, 00:26:58.687 "data_size": 63488 00:26:58.687 }, 00:26:58.687 { 00:26:58.687 "name": "BaseBdev3", 00:26:58.687 "uuid": 
"c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:58.687 "is_configured": true, 00:26:58.687 "data_offset": 2048, 00:26:58.687 "data_size": 63488 00:26:58.687 }, 00:26:58.687 { 00:26:58.687 "name": "BaseBdev4", 00:26:58.687 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:58.687 "is_configured": true, 00:26:58.687 "data_offset": 2048, 00:26:58.687 "data_size": 63488 00:26:58.687 } 00:26:58.687 ] 00:26:58.687 }' 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:58.687 17:14:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:59.620 "name": "raid_bdev1", 00:26:59.620 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:26:59.620 "strip_size_kb": 64, 00:26:59.620 "state": "online", 00:26:59.620 "raid_level": "raid5f", 00:26:59.620 "superblock": true, 00:26:59.620 "num_base_bdevs": 4, 00:26:59.620 "num_base_bdevs_discovered": 4, 00:26:59.620 "num_base_bdevs_operational": 4, 00:26:59.620 "process": { 00:26:59.620 "type": "rebuild", 00:26:59.620 "target": "spare", 00:26:59.620 "progress": { 00:26:59.620 "blocks": 124800, 00:26:59.620 "percent": 65 00:26:59.620 } 00:26:59.620 }, 00:26:59.620 "base_bdevs_list": [ 00:26:59.620 { 00:26:59.620 "name": "spare", 00:26:59.620 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:26:59.620 "is_configured": true, 00:26:59.620 "data_offset": 2048, 00:26:59.620 "data_size": 63488 00:26:59.620 }, 00:26:59.620 { 00:26:59.620 "name": "BaseBdev2", 00:26:59.620 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:26:59.620 "is_configured": true, 00:26:59.620 "data_offset": 2048, 00:26:59.620 "data_size": 63488 00:26:59.620 }, 00:26:59.620 { 00:26:59.620 "name": "BaseBdev3", 00:26:59.620 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:26:59.620 "is_configured": true, 00:26:59.620 "data_offset": 2048, 00:26:59.620 "data_size": 63488 00:26:59.620 }, 00:26:59.620 { 00:26:59.620 "name": "BaseBdev4", 00:26:59.620 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:26:59.620 "is_configured": true, 00:26:59.620 "data_offset": 2048, 00:26:59.620 "data_size": 63488 00:26:59.620 } 00:26:59.620 ] 00:26:59.620 }' 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:59.620 17:14:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:59.620 17:14:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.989 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:00.989 "name": "raid_bdev1", 00:27:00.989 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:00.989 "strip_size_kb": 64, 00:27:00.989 "state": "online", 00:27:00.989 "raid_level": "raid5f", 00:27:00.989 "superblock": true, 
00:27:00.989 "num_base_bdevs": 4, 00:27:00.989 "num_base_bdevs_discovered": 4, 00:27:00.989 "num_base_bdevs_operational": 4, 00:27:00.989 "process": { 00:27:00.989 "type": "rebuild", 00:27:00.989 "target": "spare", 00:27:00.989 "progress": { 00:27:00.989 "blocks": 145920, 00:27:00.989 "percent": 76 00:27:00.989 } 00:27:00.989 }, 00:27:00.989 "base_bdevs_list": [ 00:27:00.989 { 00:27:00.989 "name": "spare", 00:27:00.989 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:27:00.989 "is_configured": true, 00:27:00.989 "data_offset": 2048, 00:27:00.990 "data_size": 63488 00:27:00.990 }, 00:27:00.990 { 00:27:00.990 "name": "BaseBdev2", 00:27:00.990 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:00.990 "is_configured": true, 00:27:00.990 "data_offset": 2048, 00:27:00.990 "data_size": 63488 00:27:00.990 }, 00:27:00.990 { 00:27:00.990 "name": "BaseBdev3", 00:27:00.990 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:00.990 "is_configured": true, 00:27:00.990 "data_offset": 2048, 00:27:00.990 "data_size": 63488 00:27:00.990 }, 00:27:00.990 { 00:27:00.990 "name": "BaseBdev4", 00:27:00.990 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:00.990 "is_configured": true, 00:27:00.990 "data_offset": 2048, 00:27:00.990 "data_size": 63488 00:27:00.990 } 00:27:00.990 ] 00:27:00.990 }' 00:27:00.990 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:00.990 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:00.990 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:00.990 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:00.990 17:14:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:01.925 17:14:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:01.925 "name": "raid_bdev1", 00:27:01.925 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:01.925 "strip_size_kb": 64, 00:27:01.925 "state": "online", 00:27:01.925 "raid_level": "raid5f", 00:27:01.925 "superblock": true, 00:27:01.925 "num_base_bdevs": 4, 00:27:01.925 "num_base_bdevs_discovered": 4, 00:27:01.925 "num_base_bdevs_operational": 4, 00:27:01.925 "process": { 00:27:01.925 "type": "rebuild", 00:27:01.925 "target": "spare", 00:27:01.925 "progress": { 00:27:01.925 "blocks": 167040, 00:27:01.925 "percent": 87 00:27:01.925 } 00:27:01.925 }, 00:27:01.925 "base_bdevs_list": [ 00:27:01.925 { 00:27:01.925 "name": "spare", 00:27:01.925 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:27:01.925 "is_configured": true, 00:27:01.925 "data_offset": 2048, 00:27:01.925 
"data_size": 63488 00:27:01.925 }, 00:27:01.925 { 00:27:01.925 "name": "BaseBdev2", 00:27:01.925 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:01.925 "is_configured": true, 00:27:01.925 "data_offset": 2048, 00:27:01.925 "data_size": 63488 00:27:01.925 }, 00:27:01.925 { 00:27:01.925 "name": "BaseBdev3", 00:27:01.925 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:01.925 "is_configured": true, 00:27:01.925 "data_offset": 2048, 00:27:01.925 "data_size": 63488 00:27:01.925 }, 00:27:01.925 { 00:27:01.925 "name": "BaseBdev4", 00:27:01.925 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:01.925 "is_configured": true, 00:27:01.925 "data_offset": 2048, 00:27:01.925 "data_size": 63488 00:27:01.925 } 00:27:01.925 ] 00:27:01.925 }' 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:01.925 17:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:02.859 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:02.859 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:02.860 "name": "raid_bdev1", 00:27:02.860 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:02.860 "strip_size_kb": 64, 00:27:02.860 "state": "online", 00:27:02.860 "raid_level": "raid5f", 00:27:02.860 "superblock": true, 00:27:02.860 "num_base_bdevs": 4, 00:27:02.860 "num_base_bdevs_discovered": 4, 00:27:02.860 "num_base_bdevs_operational": 4, 00:27:02.860 "process": { 00:27:02.860 "type": "rebuild", 00:27:02.860 "target": "spare", 00:27:02.860 "progress": { 00:27:02.860 "blocks": 188160, 00:27:02.860 "percent": 98 00:27:02.860 } 00:27:02.860 }, 00:27:02.860 "base_bdevs_list": [ 00:27:02.860 { 00:27:02.860 "name": "spare", 00:27:02.860 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:27:02.860 "is_configured": true, 00:27:02.860 "data_offset": 2048, 00:27:02.860 "data_size": 63488 00:27:02.860 }, 00:27:02.860 { 00:27:02.860 "name": "BaseBdev2", 00:27:02.860 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:02.860 "is_configured": true, 00:27:02.860 "data_offset": 2048, 00:27:02.860 "data_size": 63488 00:27:02.860 }, 00:27:02.860 { 00:27:02.860 "name": "BaseBdev3", 00:27:02.860 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:02.860 "is_configured": true, 00:27:02.860 "data_offset": 2048, 00:27:02.860 "data_size": 63488 00:27:02.860 }, 00:27:02.860 { 00:27:02.860 "name": "BaseBdev4", 
00:27:02.860 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:02.860 "is_configured": true, 00:27:02.860 "data_offset": 2048, 00:27:02.860 "data_size": 63488 00:27:02.860 } 00:27:02.860 ] 00:27:02.860 }' 00:27:02.860 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:03.117 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:03.117 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:03.117 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:03.117 17:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:03.117 [2024-11-08 17:14:39.675495] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:03.117 [2024-11-08 17:14:39.675577] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:03.117 [2024-11-08 17:14:39.675737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:04.050 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:04.050 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:04.050 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:04.050 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:04.050 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:04.050 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:04.050 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.050 17:14:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.050 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.050 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.050 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:04.051 "name": "raid_bdev1", 00:27:04.051 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:04.051 "strip_size_kb": 64, 00:27:04.051 "state": "online", 00:27:04.051 "raid_level": "raid5f", 00:27:04.051 "superblock": true, 00:27:04.051 "num_base_bdevs": 4, 00:27:04.051 "num_base_bdevs_discovered": 4, 00:27:04.051 "num_base_bdevs_operational": 4, 00:27:04.051 "base_bdevs_list": [ 00:27:04.051 { 00:27:04.051 "name": "spare", 00:27:04.051 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:27:04.051 "is_configured": true, 00:27:04.051 "data_offset": 2048, 00:27:04.051 "data_size": 63488 00:27:04.051 }, 00:27:04.051 { 00:27:04.051 "name": "BaseBdev2", 00:27:04.051 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:04.051 "is_configured": true, 00:27:04.051 "data_offset": 2048, 00:27:04.051 "data_size": 63488 00:27:04.051 }, 00:27:04.051 { 00:27:04.051 "name": "BaseBdev3", 00:27:04.051 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:04.051 "is_configured": true, 00:27:04.051 "data_offset": 2048, 00:27:04.051 "data_size": 63488 00:27:04.051 }, 00:27:04.051 { 00:27:04.051 "name": "BaseBdev4", 00:27:04.051 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:04.051 "is_configured": true, 00:27:04.051 "data_offset": 2048, 00:27:04.051 "data_size": 63488 00:27:04.051 } 00:27:04.051 ] 00:27:04.051 }' 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:04.051 17:14:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.051 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:04.308 "name": "raid_bdev1", 00:27:04.308 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:04.308 "strip_size_kb": 64, 00:27:04.308 "state": "online", 00:27:04.308 "raid_level": "raid5f", 00:27:04.308 "superblock": true, 00:27:04.308 "num_base_bdevs": 4, 00:27:04.308 "num_base_bdevs_discovered": 4, 00:27:04.308 "num_base_bdevs_operational": 4, 
00:27:04.308 "base_bdevs_list": [ 00:27:04.308 { 00:27:04.308 "name": "spare", 00:27:04.308 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:27:04.308 "is_configured": true, 00:27:04.308 "data_offset": 2048, 00:27:04.308 "data_size": 63488 00:27:04.308 }, 00:27:04.308 { 00:27:04.308 "name": "BaseBdev2", 00:27:04.308 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:04.308 "is_configured": true, 00:27:04.308 "data_offset": 2048, 00:27:04.308 "data_size": 63488 00:27:04.308 }, 00:27:04.308 { 00:27:04.308 "name": "BaseBdev3", 00:27:04.308 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:04.308 "is_configured": true, 00:27:04.308 "data_offset": 2048, 00:27:04.308 "data_size": 63488 00:27:04.308 }, 00:27:04.308 { 00:27:04.308 "name": "BaseBdev4", 00:27:04.308 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:04.308 "is_configured": true, 00:27:04.308 "data_offset": 2048, 00:27:04.308 "data_size": 63488 00:27:04.308 } 00:27:04.308 ] 00:27:04.308 }' 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:04.308 "name": "raid_bdev1", 00:27:04.308 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:04.308 "strip_size_kb": 64, 00:27:04.308 "state": "online", 00:27:04.308 "raid_level": "raid5f", 00:27:04.308 "superblock": true, 00:27:04.308 "num_base_bdevs": 4, 00:27:04.308 "num_base_bdevs_discovered": 4, 00:27:04.308 "num_base_bdevs_operational": 4, 00:27:04.308 "base_bdevs_list": [ 00:27:04.308 { 00:27:04.308 "name": "spare", 00:27:04.308 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:27:04.308 "is_configured": true, 00:27:04.308 "data_offset": 2048, 00:27:04.308 "data_size": 63488 00:27:04.308 }, 00:27:04.308 { 00:27:04.308 "name": "BaseBdev2", 00:27:04.308 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:04.308 "is_configured": true, 00:27:04.308 
"data_offset": 2048, 00:27:04.308 "data_size": 63488 00:27:04.308 }, 00:27:04.308 { 00:27:04.308 "name": "BaseBdev3", 00:27:04.308 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:04.308 "is_configured": true, 00:27:04.308 "data_offset": 2048, 00:27:04.308 "data_size": 63488 00:27:04.308 }, 00:27:04.308 { 00:27:04.308 "name": "BaseBdev4", 00:27:04.308 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:04.308 "is_configured": true, 00:27:04.308 "data_offset": 2048, 00:27:04.308 "data_size": 63488 00:27:04.308 } 00:27:04.308 ] 00:27:04.308 }' 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:04.308 17:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.566 [2024-11-08 17:14:41.176293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:04.566 [2024-11-08 17:14:41.176331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:04.566 [2024-11-08 17:14:41.176427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:04.566 [2024-11-08 17:14:41.176535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:04.566 [2024-11-08 17:14:41.176547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.566 
17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:04.566 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:04.824 /dev/nbd0 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 
-- # basename /dev/nbd0 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:04.824 1+0 records in 00:27:04.824 1+0 records out 00:27:04.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293049 s, 14.0 MB/s 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:04.824 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:27:05.082 /dev/nbd1 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:05.082 1+0 records in 00:27:05.082 1+0 records out 00:27:05.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460898 s, 8.9 MB/s 00:27:05.082 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:05.083 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:27:05.083 
17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:05.083 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:27:05.083 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:27:05.083 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:05.083 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:05.083 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:05.341 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:27:05.341 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:05.341 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:05.341 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:05.341 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:05.341 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:05.341 17:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.599 
17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.599 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.599 [2024-11-08 17:14:42.308635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:05.599 [2024-11-08 17:14:42.308695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.599 [2024-11-08 17:14:42.308718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:27:05.599 [2024-11-08 17:14:42.308727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.599 [2024-11-08 17:14:42.310782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.599 [2024-11-08 17:14:42.310814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:05.599 [2024-11-08 17:14:42.310904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:05.599 [2024-11-08 17:14:42.310951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:05.599 [2024-11-08 17:14:42.311074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:05.599 [2024-11-08 17:14:42.311155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:05.599 [2024-11-08 17:14:42.311213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:05.857 spare 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.857 [2024-11-08 17:14:42.411306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:05.857 [2024-11-08 17:14:42.411378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:05.857 [2024-11-08 17:14:42.411721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:27:05.857 [2024-11-08 17:14:42.415511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:05.857 [2024-11-08 17:14:42.415532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:05.857 [2024-11-08 17:14:42.415728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:05.857 17:14:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.857 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:05.857 "name": "raid_bdev1", 00:27:05.857 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:05.857 "strip_size_kb": 64, 00:27:05.857 "state": "online", 00:27:05.857 "raid_level": "raid5f", 00:27:05.857 "superblock": true, 00:27:05.857 "num_base_bdevs": 4, 00:27:05.857 "num_base_bdevs_discovered": 4, 00:27:05.857 "num_base_bdevs_operational": 4, 00:27:05.857 "base_bdevs_list": [ 00:27:05.857 { 00:27:05.858 "name": "spare", 00:27:05.858 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:27:05.858 "is_configured": true, 00:27:05.858 "data_offset": 2048, 00:27:05.858 "data_size": 63488 00:27:05.858 }, 00:27:05.858 { 00:27:05.858 "name": "BaseBdev2", 00:27:05.858 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:05.858 "is_configured": true, 00:27:05.858 "data_offset": 2048, 00:27:05.858 "data_size": 63488 00:27:05.858 }, 00:27:05.858 { 00:27:05.858 "name": "BaseBdev3", 00:27:05.858 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:05.858 "is_configured": true, 00:27:05.858 "data_offset": 2048, 00:27:05.858 "data_size": 63488 00:27:05.858 }, 00:27:05.858 { 00:27:05.858 "name": "BaseBdev4", 00:27:05.858 "uuid": 
"38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:05.858 "is_configured": true, 00:27:05.858 "data_offset": 2048, 00:27:05.858 "data_size": 63488 00:27:05.858 } 00:27:05.858 ] 00:27:05.858 }' 00:27:05.858 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:05.858 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:06.117 "name": "raid_bdev1", 00:27:06.117 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:06.117 "strip_size_kb": 64, 00:27:06.117 "state": "online", 00:27:06.117 "raid_level": "raid5f", 00:27:06.117 "superblock": true, 00:27:06.117 "num_base_bdevs": 4, 00:27:06.117 "num_base_bdevs_discovered": 4, 00:27:06.117 "num_base_bdevs_operational": 4, 00:27:06.117 
"base_bdevs_list": [ 00:27:06.117 { 00:27:06.117 "name": "spare", 00:27:06.117 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:27:06.117 "is_configured": true, 00:27:06.117 "data_offset": 2048, 00:27:06.117 "data_size": 63488 00:27:06.117 }, 00:27:06.117 { 00:27:06.117 "name": "BaseBdev2", 00:27:06.117 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:06.117 "is_configured": true, 00:27:06.117 "data_offset": 2048, 00:27:06.117 "data_size": 63488 00:27:06.117 }, 00:27:06.117 { 00:27:06.117 "name": "BaseBdev3", 00:27:06.117 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:06.117 "is_configured": true, 00:27:06.117 "data_offset": 2048, 00:27:06.117 "data_size": 63488 00:27:06.117 }, 00:27:06.117 { 00:27:06.117 "name": "BaseBdev4", 00:27:06.117 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:06.117 "is_configured": true, 00:27:06.117 "data_offset": 2048, 00:27:06.117 "data_size": 63488 00:27:06.117 } 00:27:06.117 ] 00:27:06.117 }' 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.117 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.375 17:14:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.375 [2024-11-08 17:14:42.860366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.375 
17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.375 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:06.375 "name": "raid_bdev1", 00:27:06.375 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:06.375 "strip_size_kb": 64, 00:27:06.375 "state": "online", 00:27:06.375 "raid_level": "raid5f", 00:27:06.375 "superblock": true, 00:27:06.375 "num_base_bdevs": 4, 00:27:06.375 "num_base_bdevs_discovered": 3, 00:27:06.375 "num_base_bdevs_operational": 3, 00:27:06.375 "base_bdevs_list": [ 00:27:06.375 { 00:27:06.375 "name": null, 00:27:06.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.375 "is_configured": false, 00:27:06.375 "data_offset": 0, 00:27:06.375 "data_size": 63488 00:27:06.375 }, 00:27:06.375 { 00:27:06.375 "name": "BaseBdev2", 00:27:06.375 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:06.375 "is_configured": true, 00:27:06.375 "data_offset": 2048, 00:27:06.375 "data_size": 63488 00:27:06.375 }, 00:27:06.375 { 00:27:06.375 "name": "BaseBdev3", 00:27:06.375 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:06.375 "is_configured": true, 00:27:06.376 "data_offset": 2048, 00:27:06.376 "data_size": 63488 00:27:06.376 }, 00:27:06.376 { 00:27:06.376 "name": "BaseBdev4", 00:27:06.376 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:06.376 "is_configured": true, 00:27:06.376 "data_offset": 2048, 00:27:06.376 "data_size": 63488 00:27:06.376 } 00:27:06.376 ] 00:27:06.376 }' 00:27:06.376 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:06.376 17:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.632 17:14:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:06.633 17:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.633 17:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.633 [2024-11-08 17:14:43.164438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:06.633 [2024-11-08 17:14:43.164629] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:06.633 [2024-11-08 17:14:43.164650] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:06.633 [2024-11-08 17:14:43.164681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:06.633 [2024-11-08 17:14:43.172609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:27:06.633 17:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.633 17:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:27:06.633 [2024-11-08 17:14:43.178203] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:07.566 "name": "raid_bdev1", 00:27:07.566 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:07.566 "strip_size_kb": 64, 00:27:07.566 "state": "online", 00:27:07.566 "raid_level": "raid5f", 00:27:07.566 "superblock": true, 00:27:07.566 "num_base_bdevs": 4, 00:27:07.566 "num_base_bdevs_discovered": 4, 00:27:07.566 "num_base_bdevs_operational": 4, 00:27:07.566 "process": { 00:27:07.566 "type": "rebuild", 00:27:07.566 "target": "spare", 00:27:07.566 "progress": { 00:27:07.566 "blocks": 17280, 00:27:07.566 "percent": 9 00:27:07.566 } 00:27:07.566 }, 00:27:07.566 "base_bdevs_list": [ 00:27:07.566 { 00:27:07.566 "name": "spare", 00:27:07.566 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:27:07.566 "is_configured": true, 00:27:07.566 "data_offset": 2048, 00:27:07.566 "data_size": 63488 00:27:07.566 }, 00:27:07.566 { 00:27:07.566 "name": "BaseBdev2", 00:27:07.566 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:07.566 "is_configured": true, 00:27:07.566 "data_offset": 2048, 00:27:07.566 "data_size": 63488 00:27:07.566 }, 00:27:07.566 { 00:27:07.566 "name": "BaseBdev3", 00:27:07.566 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:07.566 "is_configured": true, 00:27:07.566 "data_offset": 2048, 00:27:07.566 "data_size": 63488 00:27:07.566 }, 00:27:07.566 { 00:27:07.566 "name": "BaseBdev4", 00:27:07.566 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:07.566 "is_configured": 
true, 00:27:07.566 "data_offset": 2048, 00:27:07.566 "data_size": 63488 00:27:07.566 } 00:27:07.566 ] 00:27:07.566 }' 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:07.566 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.825 [2024-11-08 17:14:44.288152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:07.825 [2024-11-08 17:14:44.388267] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:07.825 [2024-11-08 17:14:44.388339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.825 [2024-11-08 17:14:44.388356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:07.825 [2024-11-08 17:14:44.388364] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:07.825 "name": "raid_bdev1", 00:27:07.825 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:07.825 "strip_size_kb": 64, 00:27:07.825 "state": "online", 00:27:07.825 "raid_level": "raid5f", 00:27:07.825 "superblock": true, 00:27:07.825 "num_base_bdevs": 4, 00:27:07.825 "num_base_bdevs_discovered": 3, 00:27:07.825 "num_base_bdevs_operational": 3, 00:27:07.825 "base_bdevs_list": [ 00:27:07.825 { 00:27:07.825 "name": null, 00:27:07.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.825 "is_configured": false, 00:27:07.825 
"data_offset": 0, 00:27:07.825 "data_size": 63488 00:27:07.825 }, 00:27:07.825 { 00:27:07.825 "name": "BaseBdev2", 00:27:07.825 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:07.825 "is_configured": true, 00:27:07.825 "data_offset": 2048, 00:27:07.825 "data_size": 63488 00:27:07.825 }, 00:27:07.825 { 00:27:07.825 "name": "BaseBdev3", 00:27:07.825 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:07.825 "is_configured": true, 00:27:07.825 "data_offset": 2048, 00:27:07.825 "data_size": 63488 00:27:07.825 }, 00:27:07.825 { 00:27:07.825 "name": "BaseBdev4", 00:27:07.825 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:07.825 "is_configured": true, 00:27:07.825 "data_offset": 2048, 00:27:07.825 "data_size": 63488 00:27:07.825 } 00:27:07.825 ] 00:27:07.825 }' 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:07.825 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.083 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:08.083 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.083 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.083 [2024-11-08 17:14:44.729837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:08.083 [2024-11-08 17:14:44.729906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:08.083 [2024-11-08 17:14:44.729936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:27:08.083 [2024-11-08 17:14:44.729947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:08.083 [2024-11-08 17:14:44.730405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:08.083 [2024-11-08 17:14:44.730430] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:08.083 [2024-11-08 17:14:44.730523] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:08.083 [2024-11-08 17:14:44.730537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:08.083 [2024-11-08 17:14:44.730546] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:08.083 [2024-11-08 17:14:44.730565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:08.083 [2024-11-08 17:14:44.738374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:27:08.083 spare 00:27:08.083 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.083 17:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:27:08.083 [2024-11-08 17:14:44.743928] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:09.456 "name": "raid_bdev1", 00:27:09.456 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:09.456 "strip_size_kb": 64, 00:27:09.456 "state": "online", 00:27:09.456 "raid_level": "raid5f", 00:27:09.456 "superblock": true, 00:27:09.456 "num_base_bdevs": 4, 00:27:09.456 "num_base_bdevs_discovered": 4, 00:27:09.456 "num_base_bdevs_operational": 4, 00:27:09.456 "process": { 00:27:09.456 "type": "rebuild", 00:27:09.456 "target": "spare", 00:27:09.456 "progress": { 00:27:09.456 "blocks": 19200, 00:27:09.456 "percent": 10 00:27:09.456 } 00:27:09.456 }, 00:27:09.456 "base_bdevs_list": [ 00:27:09.456 { 00:27:09.456 "name": "spare", 00:27:09.456 "uuid": "c0882346-d749-5611-8381-5ae3b8baba75", 00:27:09.456 "is_configured": true, 00:27:09.456 "data_offset": 2048, 00:27:09.456 "data_size": 63488 00:27:09.456 }, 00:27:09.456 { 00:27:09.456 "name": "BaseBdev2", 00:27:09.456 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:09.456 "is_configured": true, 00:27:09.456 "data_offset": 2048, 00:27:09.456 "data_size": 63488 00:27:09.456 }, 00:27:09.456 { 00:27:09.456 "name": "BaseBdev3", 00:27:09.456 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:09.456 "is_configured": true, 00:27:09.456 "data_offset": 2048, 00:27:09.456 "data_size": 63488 00:27:09.456 }, 00:27:09.456 { 00:27:09.456 "name": "BaseBdev4", 00:27:09.456 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:09.456 "is_configured": true, 00:27:09.456 "data_offset": 2048, 00:27:09.456 "data_size": 63488 00:27:09.456 } 00:27:09.456 ] 00:27:09.456 }' 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.456 [2024-11-08 17:14:45.861085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:09.456 [2024-11-08 17:14:45.953193] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:09.456 [2024-11-08 17:14:45.953259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:09.456 [2024-11-08 17:14:45.953277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:09.456 [2024-11-08 17:14:45.953284] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=64 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.456 17:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.456 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:09.456 "name": "raid_bdev1", 00:27:09.456 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:09.456 "strip_size_kb": 64, 00:27:09.456 "state": "online", 00:27:09.456 "raid_level": "raid5f", 00:27:09.456 "superblock": true, 00:27:09.456 "num_base_bdevs": 4, 00:27:09.456 "num_base_bdevs_discovered": 3, 00:27:09.456 "num_base_bdevs_operational": 3, 00:27:09.456 "base_bdevs_list": [ 00:27:09.456 { 00:27:09.456 "name": null, 00:27:09.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.456 "is_configured": false, 00:27:09.456 "data_offset": 0, 00:27:09.456 "data_size": 63488 00:27:09.456 }, 00:27:09.456 { 00:27:09.456 "name": "BaseBdev2", 00:27:09.456 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:09.456 "is_configured": true, 00:27:09.456 
"data_offset": 2048, 00:27:09.456 "data_size": 63488 00:27:09.456 }, 00:27:09.456 { 00:27:09.456 "name": "BaseBdev3", 00:27:09.456 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:09.456 "is_configured": true, 00:27:09.456 "data_offset": 2048, 00:27:09.456 "data_size": 63488 00:27:09.456 }, 00:27:09.456 { 00:27:09.456 "name": "BaseBdev4", 00:27:09.456 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:09.456 "is_configured": true, 00:27:09.456 "data_offset": 2048, 00:27:09.456 "data_size": 63488 00:27:09.456 } 00:27:09.456 ] 00:27:09.456 }' 00:27:09.456 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:09.456 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:09.715 
"name": "raid_bdev1", 00:27:09.715 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:09.715 "strip_size_kb": 64, 00:27:09.715 "state": "online", 00:27:09.715 "raid_level": "raid5f", 00:27:09.715 "superblock": true, 00:27:09.715 "num_base_bdevs": 4, 00:27:09.715 "num_base_bdevs_discovered": 3, 00:27:09.715 "num_base_bdevs_operational": 3, 00:27:09.715 "base_bdevs_list": [ 00:27:09.715 { 00:27:09.715 "name": null, 00:27:09.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.715 "is_configured": false, 00:27:09.715 "data_offset": 0, 00:27:09.715 "data_size": 63488 00:27:09.715 }, 00:27:09.715 { 00:27:09.715 "name": "BaseBdev2", 00:27:09.715 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:09.715 "is_configured": true, 00:27:09.715 "data_offset": 2048, 00:27:09.715 "data_size": 63488 00:27:09.715 }, 00:27:09.715 { 00:27:09.715 "name": "BaseBdev3", 00:27:09.715 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:09.715 "is_configured": true, 00:27:09.715 "data_offset": 2048, 00:27:09.715 "data_size": 63488 00:27:09.715 }, 00:27:09.715 { 00:27:09.715 "name": "BaseBdev4", 00:27:09.715 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:09.715 "is_configured": true, 00:27:09.715 "data_offset": 2048, 00:27:09.715 "data_size": 63488 00:27:09.715 } 00:27:09.715 ] 00:27:09.715 }' 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.715 [2024-11-08 17:14:46.418213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:09.715 [2024-11-08 17:14:46.418267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:09.715 [2024-11-08 17:14:46.418287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:27:09.715 [2024-11-08 17:14:46.418296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:09.715 [2024-11-08 17:14:46.418742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:09.715 [2024-11-08 17:14:46.418769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:09.715 [2024-11-08 17:14:46.418840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:09.715 [2024-11-08 17:14:46.418853] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:09.715 [2024-11-08 17:14:46.418861] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:09.715 [2024-11-08 17:14:46.418870] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:27:09.715 BaseBdev1 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.715 17:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:11.140 "name": "raid_bdev1", 00:27:11.140 "uuid": 
"3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:11.140 "strip_size_kb": 64, 00:27:11.140 "state": "online", 00:27:11.140 "raid_level": "raid5f", 00:27:11.140 "superblock": true, 00:27:11.140 "num_base_bdevs": 4, 00:27:11.140 "num_base_bdevs_discovered": 3, 00:27:11.140 "num_base_bdevs_operational": 3, 00:27:11.140 "base_bdevs_list": [ 00:27:11.140 { 00:27:11.140 "name": null, 00:27:11.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.140 "is_configured": false, 00:27:11.140 "data_offset": 0, 00:27:11.140 "data_size": 63488 00:27:11.140 }, 00:27:11.140 { 00:27:11.140 "name": "BaseBdev2", 00:27:11.140 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:11.140 "is_configured": true, 00:27:11.140 "data_offset": 2048, 00:27:11.140 "data_size": 63488 00:27:11.140 }, 00:27:11.140 { 00:27:11.140 "name": "BaseBdev3", 00:27:11.140 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:11.140 "is_configured": true, 00:27:11.140 "data_offset": 2048, 00:27:11.140 "data_size": 63488 00:27:11.140 }, 00:27:11.140 { 00:27:11.140 "name": "BaseBdev4", 00:27:11.140 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:11.140 "is_configured": true, 00:27:11.140 "data_offset": 2048, 00:27:11.140 "data_size": 63488 00:27:11.140 } 00:27:11.140 ] 00:27:11.140 }' 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.140 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:11.140 "name": "raid_bdev1", 00:27:11.140 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:11.140 "strip_size_kb": 64, 00:27:11.140 "state": "online", 00:27:11.140 "raid_level": "raid5f", 00:27:11.140 "superblock": true, 00:27:11.140 "num_base_bdevs": 4, 00:27:11.140 "num_base_bdevs_discovered": 3, 00:27:11.140 "num_base_bdevs_operational": 3, 00:27:11.140 "base_bdevs_list": [ 00:27:11.140 { 00:27:11.140 "name": null, 00:27:11.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.140 "is_configured": false, 00:27:11.140 "data_offset": 0, 00:27:11.140 "data_size": 63488 00:27:11.140 }, 00:27:11.140 { 00:27:11.140 "name": "BaseBdev2", 00:27:11.140 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:11.140 "is_configured": true, 00:27:11.140 "data_offset": 2048, 00:27:11.141 "data_size": 63488 00:27:11.141 }, 00:27:11.141 { 00:27:11.141 "name": "BaseBdev3", 00:27:11.141 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:11.141 "is_configured": true, 00:27:11.141 "data_offset": 2048, 00:27:11.141 "data_size": 63488 00:27:11.141 }, 00:27:11.141 { 00:27:11.141 "name": "BaseBdev4", 00:27:11.141 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:11.141 "is_configured": true, 00:27:11.141 "data_offset": 2048, 00:27:11.141 "data_size": 63488 
00:27:11.141 } 00:27:11.141 ] 00:27:11.141 }' 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.141 [2024-11-08 17:14:47.846531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:11.141 [2024-11-08 17:14:47.846685] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (5) 00:27:11.141 [2024-11-08 17:14:47.846697] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:11.141 request: 00:27:11.141 { 00:27:11.141 "base_bdev": "BaseBdev1", 00:27:11.141 "raid_bdev": "raid_bdev1", 00:27:11.141 "method": "bdev_raid_add_base_bdev", 00:27:11.141 "req_id": 1 00:27:11.141 } 00:27:11.141 Got JSON-RPC error response 00:27:11.141 response: 00:27:11.141 { 00:27:11.141 "code": -22, 00:27:11.141 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:11.141 } 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:11.141 17:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.516 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:12.516 "name": "raid_bdev1", 00:27:12.516 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:12.516 "strip_size_kb": 64, 00:27:12.516 "state": "online", 00:27:12.516 "raid_level": "raid5f", 00:27:12.516 "superblock": true, 00:27:12.516 "num_base_bdevs": 4, 00:27:12.516 "num_base_bdevs_discovered": 3, 00:27:12.516 "num_base_bdevs_operational": 3, 00:27:12.516 "base_bdevs_list": [ 00:27:12.516 { 00:27:12.516 "name": null, 00:27:12.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.516 "is_configured": false, 00:27:12.516 "data_offset": 0, 00:27:12.516 "data_size": 63488 00:27:12.516 }, 00:27:12.516 { 00:27:12.516 "name": "BaseBdev2", 00:27:12.516 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:12.516 "is_configured": true, 00:27:12.516 "data_offset": 2048, 00:27:12.516 "data_size": 63488 00:27:12.516 }, 00:27:12.516 { 00:27:12.517 "name": "BaseBdev3", 00:27:12.517 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:12.517 "is_configured": true, 00:27:12.517 
"data_offset": 2048, 00:27:12.517 "data_size": 63488 00:27:12.517 }, 00:27:12.517 { 00:27:12.517 "name": "BaseBdev4", 00:27:12.517 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:12.517 "is_configured": true, 00:27:12.517 "data_offset": 2048, 00:27:12.517 "data_size": 63488 00:27:12.517 } 00:27:12.517 ] 00:27:12.517 }' 00:27:12.517 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:12.517 17:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:12.517 "name": "raid_bdev1", 00:27:12.517 "uuid": "3d626dc9-2123-475d-a803-9f2ed23f55ee", 00:27:12.517 "strip_size_kb": 64, 00:27:12.517 "state": "online", 00:27:12.517 "raid_level": "raid5f", 00:27:12.517 "superblock": true, 00:27:12.517 
"num_base_bdevs": 4, 00:27:12.517 "num_base_bdevs_discovered": 3, 00:27:12.517 "num_base_bdevs_operational": 3, 00:27:12.517 "base_bdevs_list": [ 00:27:12.517 { 00:27:12.517 "name": null, 00:27:12.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.517 "is_configured": false, 00:27:12.517 "data_offset": 0, 00:27:12.517 "data_size": 63488 00:27:12.517 }, 00:27:12.517 { 00:27:12.517 "name": "BaseBdev2", 00:27:12.517 "uuid": "e933122f-b5de-516b-8cb4-8ccec58e7692", 00:27:12.517 "is_configured": true, 00:27:12.517 "data_offset": 2048, 00:27:12.517 "data_size": 63488 00:27:12.517 }, 00:27:12.517 { 00:27:12.517 "name": "BaseBdev3", 00:27:12.517 "uuid": "c98339ea-79a9-52a5-9294-3139c951f7e9", 00:27:12.517 "is_configured": true, 00:27:12.517 "data_offset": 2048, 00:27:12.517 "data_size": 63488 00:27:12.517 }, 00:27:12.517 { 00:27:12.517 "name": "BaseBdev4", 00:27:12.517 "uuid": "38e74267-49e9-5c0b-9539-35fb93b16715", 00:27:12.517 "is_configured": true, 00:27:12.517 "data_offset": 2048, 00:27:12.517 "data_size": 63488 00:27:12.517 } 00:27:12.517 ] 00:27:12.517 }' 00:27:12.517 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 83381 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83381 ']' 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 83381 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:27:12.775 17:14:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83381 00:27:12.775 killing process with pid 83381 00:27:12.775 Received shutdown signal, test time was about 60.000000 seconds 00:27:12.775 00:27:12.775 Latency(us) 00:27:12.775 [2024-11-08T17:14:49.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.775 [2024-11-08T17:14:49.490Z] =================================================================================================================== 00:27:12.775 [2024-11-08T17:14:49.490Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83381' 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 83381 00:27:12.775 [2024-11-08 17:14:49.291558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:12.775 17:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 83381 00:27:12.775 [2024-11-08 17:14:49.291677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:12.775 [2024-11-08 17:14:49.291747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:12.775 [2024-11-08 17:14:49.291768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:13.033 [2024-11-08 17:14:49.536794] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:13.600 ************************************ 
00:27:13.600 END TEST raid5f_rebuild_test_sb 00:27:13.600 ************************************ 00:27:13.600 17:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:27:13.600 00:27:13.600 real 0m25.045s 00:27:13.600 user 0m30.290s 00:27:13.600 sys 0m2.304s 00:27:13.600 17:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:13.600 17:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:13.600 17:14:50 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:27:13.600 17:14:50 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:27:13.600 17:14:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:13.600 17:14:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:13.600 17:14:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:13.600 ************************************ 00:27:13.600 START TEST raid_state_function_test_sb_4k 00:27:13.600 ************************************ 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k 
-- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:13.600 Process raid pid: 84182 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=84182 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84182' 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 84182 00:27:13.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 84182 ']' 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:13.600 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:13.601 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.601 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:13.601 17:14:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:13.601 [2024-11-08 17:14:50.233551] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:27:13.601 [2024-11-08 17:14:50.233657] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.859 [2024-11-08 17:14:50.385881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.859 [2024-11-08 17:14:50.483434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.116 [2024-11-08 17:14:50.606015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:14.117 [2024-11-08 17:14:50.606051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:14.375 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:14.375 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:27:14.375 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:14.375 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.375 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.633 [2024-11-08 17:14:51.090569] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:14.633 [2024-11-08 17:14:51.090618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:14.633 [2024-11-08 17:14:51.090627] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:14.633 [2024-11-08 17:14:51.090635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:14.633 "name": "Existed_Raid", 00:27:14.633 "uuid": 
"f80cae1e-1906-468e-a146-26c0291ea4f7", 00:27:14.633 "strip_size_kb": 0, 00:27:14.633 "state": "configuring", 00:27:14.633 "raid_level": "raid1", 00:27:14.633 "superblock": true, 00:27:14.633 "num_base_bdevs": 2, 00:27:14.633 "num_base_bdevs_discovered": 0, 00:27:14.633 "num_base_bdevs_operational": 2, 00:27:14.633 "base_bdevs_list": [ 00:27:14.633 { 00:27:14.633 "name": "BaseBdev1", 00:27:14.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.633 "is_configured": false, 00:27:14.633 "data_offset": 0, 00:27:14.633 "data_size": 0 00:27:14.633 }, 00:27:14.633 { 00:27:14.633 "name": "BaseBdev2", 00:27:14.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.633 "is_configured": false, 00:27:14.633 "data_offset": 0, 00:27:14.633 "data_size": 0 00:27:14.633 } 00:27:14.633 ] 00:27:14.633 }' 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:14.633 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.891 [2024-11-08 17:14:51.386598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:14.891 [2024-11-08 17:14:51.386631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:14.891 17:14:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.891 [2024-11-08 17:14:51.394601] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:14.891 [2024-11-08 17:14:51.394714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:14.891 [2024-11-08 17:14:51.394776] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:14.891 [2024-11-08 17:14:51.394803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.891 [2024-11-08 17:14:51.424774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:14.891 BaseBdev1 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.891 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.891 [ 00:27:14.891 { 00:27:14.891 "name": "BaseBdev1", 00:27:14.891 "aliases": [ 00:27:14.891 "3aba5f1f-5257-430a-b2a7-0539902d84f1" 00:27:14.891 ], 00:27:14.891 "product_name": "Malloc disk", 00:27:14.891 "block_size": 4096, 00:27:14.891 "num_blocks": 8192, 00:27:14.891 "uuid": "3aba5f1f-5257-430a-b2a7-0539902d84f1", 00:27:14.891 "assigned_rate_limits": { 00:27:14.891 "rw_ios_per_sec": 0, 00:27:14.891 "rw_mbytes_per_sec": 0, 00:27:14.891 "r_mbytes_per_sec": 0, 00:27:14.891 "w_mbytes_per_sec": 0 00:27:14.891 }, 00:27:14.891 "claimed": true, 00:27:14.891 "claim_type": "exclusive_write", 00:27:14.891 "zoned": false, 00:27:14.891 "supported_io_types": { 00:27:14.891 "read": true, 00:27:14.891 "write": true, 00:27:14.891 "unmap": true, 00:27:14.891 "flush": true, 00:27:14.891 "reset": true, 00:27:14.891 "nvme_admin": false, 00:27:14.891 "nvme_io": false, 00:27:14.891 "nvme_io_md": false, 00:27:14.891 "write_zeroes": true, 00:27:14.891 "zcopy": true, 00:27:14.891 
"get_zone_info": false, 00:27:14.891 "zone_management": false, 00:27:14.891 "zone_append": false, 00:27:14.891 "compare": false, 00:27:14.891 "compare_and_write": false, 00:27:14.891 "abort": true, 00:27:14.891 "seek_hole": false, 00:27:14.891 "seek_data": false, 00:27:14.891 "copy": true, 00:27:14.891 "nvme_iov_md": false 00:27:14.891 }, 00:27:14.891 "memory_domains": [ 00:27:14.891 { 00:27:14.891 "dma_device_id": "system", 00:27:14.891 "dma_device_type": 1 00:27:14.891 }, 00:27:14.891 { 00:27:14.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.892 "dma_device_type": 2 00:27:14.892 } 00:27:14.892 ], 00:27:14.892 "driver_specific": {} 00:27:14.892 } 00:27:14.892 ] 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:14.892 "name": "Existed_Raid", 00:27:14.892 "uuid": "6135bc2a-68b7-4e22-a501-4cf779a83192", 00:27:14.892 "strip_size_kb": 0, 00:27:14.892 "state": "configuring", 00:27:14.892 "raid_level": "raid1", 00:27:14.892 "superblock": true, 00:27:14.892 "num_base_bdevs": 2, 00:27:14.892 "num_base_bdevs_discovered": 1, 00:27:14.892 "num_base_bdevs_operational": 2, 00:27:14.892 "base_bdevs_list": [ 00:27:14.892 { 00:27:14.892 "name": "BaseBdev1", 00:27:14.892 "uuid": "3aba5f1f-5257-430a-b2a7-0539902d84f1", 00:27:14.892 "is_configured": true, 00:27:14.892 "data_offset": 256, 00:27:14.892 "data_size": 7936 00:27:14.892 }, 00:27:14.892 { 00:27:14.892 "name": "BaseBdev2", 00:27:14.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.892 "is_configured": false, 00:27:14.892 "data_offset": 0, 00:27:14.892 "data_size": 0 00:27:14.892 } 00:27:14.892 ] 00:27:14.892 }' 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:14.892 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.150 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:15.150 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.150 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.151 [2024-11-08 17:14:51.736869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:15.151 [2024-11-08 17:14:51.736921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.151 [2024-11-08 17:14:51.744910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:15.151 [2024-11-08 17:14:51.746564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:15.151 [2024-11-08 17:14:51.746608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:15.151 17:14:51 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:15.151 "name": "Existed_Raid", 00:27:15.151 "uuid": "6344fc2b-edba-4a21-bdde-7975d835d865", 00:27:15.151 "strip_size_kb": 0, 00:27:15.151 "state": "configuring", 00:27:15.151 "raid_level": "raid1", 00:27:15.151 "superblock": true, 
00:27:15.151 "num_base_bdevs": 2, 00:27:15.151 "num_base_bdevs_discovered": 1, 00:27:15.151 "num_base_bdevs_operational": 2, 00:27:15.151 "base_bdevs_list": [ 00:27:15.151 { 00:27:15.151 "name": "BaseBdev1", 00:27:15.151 "uuid": "3aba5f1f-5257-430a-b2a7-0539902d84f1", 00:27:15.151 "is_configured": true, 00:27:15.151 "data_offset": 256, 00:27:15.151 "data_size": 7936 00:27:15.151 }, 00:27:15.151 { 00:27:15.151 "name": "BaseBdev2", 00:27:15.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.151 "is_configured": false, 00:27:15.151 "data_offset": 0, 00:27:15.151 "data_size": 0 00:27:15.151 } 00:27:15.151 ] 00:27:15.151 }' 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:15.151 17:14:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.409 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:27:15.409 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.409 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.667 [2024-11-08 17:14:52.129385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:15.667 [2024-11-08 17:14:52.129597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:15.667 [2024-11-08 17:14:52.129609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:15.667 BaseBdev2 00:27:15.667 [2024-11-08 17:14:52.129854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:15.667 [2024-11-08 17:14:52.129982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:15.667 [2024-11-08 17:14:52.129991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:27:15.667 [2024-11-08 17:14:52.130107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.667 [ 00:27:15.667 { 00:27:15.667 "name": "BaseBdev2", 00:27:15.667 "aliases": [ 00:27:15.667 "375c65bb-418f-4778-9e86-db09e5e7470f" 00:27:15.667 ], 00:27:15.667 "product_name": "Malloc 
disk", 00:27:15.667 "block_size": 4096, 00:27:15.667 "num_blocks": 8192, 00:27:15.667 "uuid": "375c65bb-418f-4778-9e86-db09e5e7470f", 00:27:15.667 "assigned_rate_limits": { 00:27:15.667 "rw_ios_per_sec": 0, 00:27:15.667 "rw_mbytes_per_sec": 0, 00:27:15.667 "r_mbytes_per_sec": 0, 00:27:15.667 "w_mbytes_per_sec": 0 00:27:15.667 }, 00:27:15.667 "claimed": true, 00:27:15.667 "claim_type": "exclusive_write", 00:27:15.667 "zoned": false, 00:27:15.667 "supported_io_types": { 00:27:15.667 "read": true, 00:27:15.667 "write": true, 00:27:15.667 "unmap": true, 00:27:15.667 "flush": true, 00:27:15.667 "reset": true, 00:27:15.667 "nvme_admin": false, 00:27:15.667 "nvme_io": false, 00:27:15.667 "nvme_io_md": false, 00:27:15.667 "write_zeroes": true, 00:27:15.667 "zcopy": true, 00:27:15.667 "get_zone_info": false, 00:27:15.667 "zone_management": false, 00:27:15.667 "zone_append": false, 00:27:15.667 "compare": false, 00:27:15.667 "compare_and_write": false, 00:27:15.667 "abort": true, 00:27:15.667 "seek_hole": false, 00:27:15.667 "seek_data": false, 00:27:15.667 "copy": true, 00:27:15.667 "nvme_iov_md": false 00:27:15.667 }, 00:27:15.667 "memory_domains": [ 00:27:15.667 { 00:27:15.667 "dma_device_id": "system", 00:27:15.667 "dma_device_type": 1 00:27:15.667 }, 00:27:15.667 { 00:27:15.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.667 "dma_device_type": 2 00:27:15.667 } 00:27:15.667 ], 00:27:15.667 "driver_specific": {} 00:27:15.667 } 00:27:15.667 ] 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:15.667 "name": "Existed_Raid", 00:27:15.667 "uuid": "6344fc2b-edba-4a21-bdde-7975d835d865", 00:27:15.667 "strip_size_kb": 0, 00:27:15.667 "state": "online", 
00:27:15.667 "raid_level": "raid1", 00:27:15.667 "superblock": true, 00:27:15.667 "num_base_bdevs": 2, 00:27:15.667 "num_base_bdevs_discovered": 2, 00:27:15.667 "num_base_bdevs_operational": 2, 00:27:15.667 "base_bdevs_list": [ 00:27:15.667 { 00:27:15.667 "name": "BaseBdev1", 00:27:15.667 "uuid": "3aba5f1f-5257-430a-b2a7-0539902d84f1", 00:27:15.667 "is_configured": true, 00:27:15.667 "data_offset": 256, 00:27:15.667 "data_size": 7936 00:27:15.667 }, 00:27:15.667 { 00:27:15.667 "name": "BaseBdev2", 00:27:15.667 "uuid": "375c65bb-418f-4778-9e86-db09e5e7470f", 00:27:15.667 "is_configured": true, 00:27:15.667 "data_offset": 256, 00:27:15.667 "data_size": 7936 00:27:15.667 } 00:27:15.667 ] 00:27:15.667 }' 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:15.667 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.925 [2024-11-08 17:14:52.485812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.925 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:15.925 "name": "Existed_Raid", 00:27:15.925 "aliases": [ 00:27:15.925 "6344fc2b-edba-4a21-bdde-7975d835d865" 00:27:15.925 ], 00:27:15.925 "product_name": "Raid Volume", 00:27:15.925 "block_size": 4096, 00:27:15.925 "num_blocks": 7936, 00:27:15.925 "uuid": "6344fc2b-edba-4a21-bdde-7975d835d865", 00:27:15.925 "assigned_rate_limits": { 00:27:15.925 "rw_ios_per_sec": 0, 00:27:15.925 "rw_mbytes_per_sec": 0, 00:27:15.925 "r_mbytes_per_sec": 0, 00:27:15.925 "w_mbytes_per_sec": 0 00:27:15.925 }, 00:27:15.925 "claimed": false, 00:27:15.925 "zoned": false, 00:27:15.925 "supported_io_types": { 00:27:15.925 "read": true, 00:27:15.925 "write": true, 00:27:15.926 "unmap": false, 00:27:15.926 "flush": false, 00:27:15.926 "reset": true, 00:27:15.926 "nvme_admin": false, 00:27:15.926 "nvme_io": false, 00:27:15.926 "nvme_io_md": false, 00:27:15.926 "write_zeroes": true, 00:27:15.926 "zcopy": false, 00:27:15.926 "get_zone_info": false, 00:27:15.926 "zone_management": false, 00:27:15.926 "zone_append": false, 00:27:15.926 "compare": false, 00:27:15.926 "compare_and_write": false, 00:27:15.926 "abort": false, 00:27:15.926 "seek_hole": false, 00:27:15.926 "seek_data": false, 00:27:15.926 "copy": false, 00:27:15.926 "nvme_iov_md": false 00:27:15.926 }, 00:27:15.926 "memory_domains": [ 00:27:15.926 { 00:27:15.926 "dma_device_id": "system", 00:27:15.926 "dma_device_type": 1 00:27:15.926 }, 00:27:15.926 { 00:27:15.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.926 "dma_device_type": 2 00:27:15.926 }, 00:27:15.926 { 00:27:15.926 
"dma_device_id": "system", 00:27:15.926 "dma_device_type": 1 00:27:15.926 }, 00:27:15.926 { 00:27:15.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.926 "dma_device_type": 2 00:27:15.926 } 00:27:15.926 ], 00:27:15.926 "driver_specific": { 00:27:15.926 "raid": { 00:27:15.926 "uuid": "6344fc2b-edba-4a21-bdde-7975d835d865", 00:27:15.926 "strip_size_kb": 0, 00:27:15.926 "state": "online", 00:27:15.926 "raid_level": "raid1", 00:27:15.926 "superblock": true, 00:27:15.926 "num_base_bdevs": 2, 00:27:15.926 "num_base_bdevs_discovered": 2, 00:27:15.926 "num_base_bdevs_operational": 2, 00:27:15.926 "base_bdevs_list": [ 00:27:15.926 { 00:27:15.926 "name": "BaseBdev1", 00:27:15.926 "uuid": "3aba5f1f-5257-430a-b2a7-0539902d84f1", 00:27:15.926 "is_configured": true, 00:27:15.926 "data_offset": 256, 00:27:15.926 "data_size": 7936 00:27:15.926 }, 00:27:15.926 { 00:27:15.926 "name": "BaseBdev2", 00:27:15.926 "uuid": "375c65bb-418f-4778-9e86-db09e5e7470f", 00:27:15.926 "is_configured": true, 00:27:15.926 "data_offset": 256, 00:27:15.926 "data_size": 7936 00:27:15.926 } 00:27:15.926 ] 00:27:15.926 } 00:27:15.926 } 00:27:15.926 }' 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:15.926 BaseBdev2' 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.926 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.185 
17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:16.185 [2024-11-08 17:14:52.665613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:16.185 17:14:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:16.185 "name": "Existed_Raid", 00:27:16.185 "uuid": "6344fc2b-edba-4a21-bdde-7975d835d865", 00:27:16.185 "strip_size_kb": 0, 00:27:16.185 "state": "online", 00:27:16.185 "raid_level": "raid1", 00:27:16.185 "superblock": true, 00:27:16.185 "num_base_bdevs": 2, 00:27:16.185 "num_base_bdevs_discovered": 1, 00:27:16.185 "num_base_bdevs_operational": 1, 00:27:16.185 "base_bdevs_list": [ 00:27:16.185 { 00:27:16.185 "name": null, 00:27:16.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.185 "is_configured": false, 00:27:16.185 "data_offset": 0, 00:27:16.185 "data_size": 7936 00:27:16.185 }, 00:27:16.185 { 00:27:16.185 "name": "BaseBdev2", 00:27:16.185 "uuid": "375c65bb-418f-4778-9e86-db09e5e7470f", 00:27:16.185 "is_configured": true, 00:27:16.185 "data_offset": 256, 00:27:16.185 "data_size": 7936 00:27:16.185 } 00:27:16.185 ] 00:27:16.185 }' 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:16.185 17:14:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:16.443 17:14:53 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:16.443 [2024-11-08 17:14:53.081277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:16.443 [2024-11-08 17:14:53.081378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:16.443 [2024-11-08 17:14:53.130730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:16.443 [2024-11-08 17:14:53.130908] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:16.443 [2024-11-08 17:14:53.130971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:16.443 17:14:53 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:16.443 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:16.444 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.444 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.444 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:16.444 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:16.444 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.702 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:16.702 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:16.702 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:27:16.702 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 84182 00:27:16.702 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 84182 ']' 00:27:16.702 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 84182 00:27:16.703 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:27:16.703 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:16.703 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84182 00:27:16.703 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:16.703 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:16.703 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84182' 00:27:16.703 killing process with pid 84182 00:27:16.703 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 84182 00:27:16.703 [2024-11-08 17:14:53.197308] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:16.703 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 84182 00:27:16.703 [2024-11-08 17:14:53.206144] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:17.295 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:27:17.295 00:27:17.295 real 0m3.638s 00:27:17.295 user 0m5.310s 00:27:17.295 sys 0m0.599s 00:27:17.295 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:17.295 ************************************ 00:27:17.295 END TEST raid_state_function_test_sb_4k 00:27:17.295 ************************************ 00:27:17.295 17:14:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:17.295 17:14:53 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:27:17.295 17:14:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:17.295 17:14:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:17.295 17:14:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:17.295 ************************************ 00:27:17.295 START TEST raid_superblock_test_4k 00:27:17.295 ************************************ 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # 
raid_superblock_test raid1 2 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:17.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=84418 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 84418 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 84418 ']' 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:17.295 17:14:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:17.295 [2024-11-08 17:14:53.921162] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:27:17.295 [2024-11-08 17:14:53.921299] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84418 ] 00:27:17.553 [2024-11-08 17:14:54.083970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.553 [2024-11-08 17:14:54.200334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.812 [2024-11-08 17:14:54.346077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:17.812 [2024-11-08 17:14:54.346121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.378 malloc1 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.378 [2024-11-08 17:14:54.838972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:18.378 [2024-11-08 17:14:54.839154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.378 [2024-11-08 17:14:54.839184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:18.378 [2024-11-08 17:14:54.839196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.378 [2024-11-08 17:14:54.841461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.378 [2024-11-08 17:14:54.841491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:18.378 pt1 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.378 malloc2 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.378 [2024-11-08 17:14:54.876746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:18.378 [2024-11-08 17:14:54.876806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.378 [2024-11-08 17:14:54.876829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:18.378 [2024-11-08 17:14:54.876838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.378 [2024-11-08 17:14:54.879309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.378 [2024-11-08 
17:14:54.879344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:18.378 pt2 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.378 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.379 [2024-11-08 17:14:54.884825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:18.379 [2024-11-08 17:14:54.886798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:18.379 [2024-11-08 17:14:54.886961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:18.379 [2024-11-08 17:14:54.886977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:18.379 [2024-11-08 17:14:54.887225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:18.379 [2024-11-08 17:14:54.887382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:18.379 [2024-11-08 17:14:54.887397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:18.379 [2024-11-08 17:14:54.887539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:18.379 "name": "raid_bdev1", 00:27:18.379 "uuid": "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef", 00:27:18.379 "strip_size_kb": 0, 00:27:18.379 "state": "online", 00:27:18.379 "raid_level": "raid1", 00:27:18.379 "superblock": true, 00:27:18.379 "num_base_bdevs": 2, 00:27:18.379 
"num_base_bdevs_discovered": 2, 00:27:18.379 "num_base_bdevs_operational": 2, 00:27:18.379 "base_bdevs_list": [ 00:27:18.379 { 00:27:18.379 "name": "pt1", 00:27:18.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:18.379 "is_configured": true, 00:27:18.379 "data_offset": 256, 00:27:18.379 "data_size": 7936 00:27:18.379 }, 00:27:18.379 { 00:27:18.379 "name": "pt2", 00:27:18.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:18.379 "is_configured": true, 00:27:18.379 "data_offset": 256, 00:27:18.379 "data_size": 7936 00:27:18.379 } 00:27:18.379 ] 00:27:18.379 }' 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:18.379 17:14:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.639 [2024-11-08 17:14:55.217187] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.639 "name": "raid_bdev1", 00:27:18.639 "aliases": [ 00:27:18.639 "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef" 00:27:18.639 ], 00:27:18.639 "product_name": "Raid Volume", 00:27:18.639 "block_size": 4096, 00:27:18.639 "num_blocks": 7936, 00:27:18.639 "uuid": "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef", 00:27:18.639 "assigned_rate_limits": { 00:27:18.639 "rw_ios_per_sec": 0, 00:27:18.639 "rw_mbytes_per_sec": 0, 00:27:18.639 "r_mbytes_per_sec": 0, 00:27:18.639 "w_mbytes_per_sec": 0 00:27:18.639 }, 00:27:18.639 "claimed": false, 00:27:18.639 "zoned": false, 00:27:18.639 "supported_io_types": { 00:27:18.639 "read": true, 00:27:18.639 "write": true, 00:27:18.639 "unmap": false, 00:27:18.639 "flush": false, 00:27:18.639 "reset": true, 00:27:18.639 "nvme_admin": false, 00:27:18.639 "nvme_io": false, 00:27:18.639 "nvme_io_md": false, 00:27:18.639 "write_zeroes": true, 00:27:18.639 "zcopy": false, 00:27:18.639 "get_zone_info": false, 00:27:18.639 "zone_management": false, 00:27:18.639 "zone_append": false, 00:27:18.639 "compare": false, 00:27:18.639 "compare_and_write": false, 00:27:18.639 "abort": false, 00:27:18.639 "seek_hole": false, 00:27:18.639 "seek_data": false, 00:27:18.639 "copy": false, 00:27:18.639 "nvme_iov_md": false 00:27:18.639 }, 00:27:18.639 "memory_domains": [ 00:27:18.639 { 00:27:18.639 "dma_device_id": "system", 00:27:18.639 "dma_device_type": 1 00:27:18.639 }, 00:27:18.639 { 00:27:18.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.639 "dma_device_type": 2 00:27:18.639 }, 00:27:18.639 { 00:27:18.639 "dma_device_id": "system", 00:27:18.639 "dma_device_type": 1 00:27:18.639 }, 00:27:18.639 { 00:27:18.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.639 "dma_device_type": 2 00:27:18.639 } 00:27:18.639 ], 
00:27:18.639 "driver_specific": { 00:27:18.639 "raid": { 00:27:18.639 "uuid": "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef", 00:27:18.639 "strip_size_kb": 0, 00:27:18.639 "state": "online", 00:27:18.639 "raid_level": "raid1", 00:27:18.639 "superblock": true, 00:27:18.639 "num_base_bdevs": 2, 00:27:18.639 "num_base_bdevs_discovered": 2, 00:27:18.639 "num_base_bdevs_operational": 2, 00:27:18.639 "base_bdevs_list": [ 00:27:18.639 { 00:27:18.639 "name": "pt1", 00:27:18.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:18.639 "is_configured": true, 00:27:18.639 "data_offset": 256, 00:27:18.639 "data_size": 7936 00:27:18.639 }, 00:27:18.639 { 00:27:18.639 "name": "pt2", 00:27:18.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:18.639 "is_configured": true, 00:27:18.639 "data_offset": 256, 00:27:18.639 "data_size": 7936 00:27:18.639 } 00:27:18.639 ] 00:27:18.639 } 00:27:18.639 } 00:27:18.639 }' 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:18.639 pt2' 00:27:18.639 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:18.900 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:27:18.900 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:18.900 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:18.900 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 [2024-11-08 17:14:55.445246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef ']' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 [2024-11-08 17:14:55.476911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:18.901 [2024-11-08 17:14:55.476936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:18.901 [2024-11-08 17:14:55.477029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:18.901 [2024-11-08 17:14:55.477094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:18.901 [2024-11-08 17:14:55.477106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 [2024-11-08 17:14:55.560963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:18.901 [2024-11-08 17:14:55.562991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:18.901 [2024-11-08 17:14:55.563059] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:18.901 [2024-11-08 17:14:55.563114] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:18.901 [2024-11-08 17:14:55.563129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:18.901 [2024-11-08 17:14:55.563141] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:27:18.901 request: 00:27:18.901 { 00:27:18.901 "name": "raid_bdev1", 00:27:18.901 "raid_level": "raid1", 00:27:18.901 "base_bdevs": [ 00:27:18.901 "malloc1", 00:27:18.901 "malloc2" 00:27:18.901 ], 00:27:18.901 "superblock": false, 00:27:18.901 "method": "bdev_raid_create", 00:27:18.901 "req_id": 1 00:27:18.901 } 00:27:18.901 Got JSON-RPC error response 00:27:18.901 response: 00:27:18.901 { 00:27:18.901 "code": -17, 00:27:18.901 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:18.901 } 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.901 [2024-11-08 17:14:55.604939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:18.901 [2024-11-08 17:14:55.604994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.901 [2024-11-08 17:14:55.605013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:18.901 [2024-11-08 17:14:55.605025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.901 [2024-11-08 17:14:55.607383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.901 [2024-11-08 17:14:55.607419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:18.901 [2024-11-08 17:14:55.607500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:18.901 [2024-11-08 17:14:55.607563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:18.901 pt1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:18.901 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:18.902 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.902 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.902 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.902 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.160 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.160 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:19.160 "name": "raid_bdev1", 00:27:19.160 "uuid": "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef", 00:27:19.160 "strip_size_kb": 0, 00:27:19.160 "state": "configuring", 00:27:19.160 "raid_level": "raid1", 00:27:19.160 "superblock": true, 00:27:19.160 "num_base_bdevs": 2, 00:27:19.160 "num_base_bdevs_discovered": 1, 00:27:19.160 "num_base_bdevs_operational": 2, 00:27:19.160 "base_bdevs_list": [ 00:27:19.160 { 00:27:19.160 "name": "pt1", 00:27:19.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:19.160 "is_configured": true, 00:27:19.160 "data_offset": 256, 00:27:19.160 "data_size": 7936 00:27:19.160 }, 00:27:19.160 { 00:27:19.160 "name": null, 00:27:19.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:19.160 "is_configured": false, 00:27:19.160 "data_offset": 256, 00:27:19.160 "data_size": 7936 00:27:19.160 } 
00:27:19.160 ] 00:27:19.160 }' 00:27:19.160 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:19.161 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.419 [2024-11-08 17:14:55.933043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:19.419 [2024-11-08 17:14:55.933113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.419 [2024-11-08 17:14:55.933133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:19.419 [2024-11-08 17:14:55.933144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.419 [2024-11-08 17:14:55.933575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.419 [2024-11-08 17:14:55.933596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:19.419 [2024-11-08 17:14:55.933671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:19.419 [2024-11-08 17:14:55.933692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:19.419 [2024-11-08 17:14:55.933806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:27:19.419 [2024-11-08 17:14:55.933817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:19.419 [2024-11-08 17:14:55.934025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:19.419 [2024-11-08 17:14:55.934150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:19.419 [2024-11-08 17:14:55.934157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:19.419 [2024-11-08 17:14:55.934272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.419 pt2 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:19.419 "name": "raid_bdev1", 00:27:19.419 "uuid": "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef", 00:27:19.419 "strip_size_kb": 0, 00:27:19.419 "state": "online", 00:27:19.419 "raid_level": "raid1", 00:27:19.419 "superblock": true, 00:27:19.419 "num_base_bdevs": 2, 00:27:19.419 "num_base_bdevs_discovered": 2, 00:27:19.419 "num_base_bdevs_operational": 2, 00:27:19.419 "base_bdevs_list": [ 00:27:19.419 { 00:27:19.419 "name": "pt1", 00:27:19.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:19.419 "is_configured": true, 00:27:19.419 "data_offset": 256, 00:27:19.419 "data_size": 7936 00:27:19.419 }, 00:27:19.419 { 00:27:19.419 "name": "pt2", 00:27:19.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:19.419 "is_configured": true, 00:27:19.419 "data_offset": 256, 00:27:19.419 "data_size": 7936 00:27:19.419 } 00:27:19.419 ] 00:27:19.419 }' 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:19.419 17:14:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.677 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:27:19.677 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:19.678 [2024-11-08 17:14:56.249334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:19.678 "name": "raid_bdev1", 00:27:19.678 "aliases": [ 00:27:19.678 "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef" 00:27:19.678 ], 00:27:19.678 "product_name": "Raid Volume", 00:27:19.678 "block_size": 4096, 00:27:19.678 "num_blocks": 7936, 00:27:19.678 "uuid": "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef", 00:27:19.678 "assigned_rate_limits": { 00:27:19.678 "rw_ios_per_sec": 0, 00:27:19.678 "rw_mbytes_per_sec": 0, 00:27:19.678 "r_mbytes_per_sec": 0, 00:27:19.678 "w_mbytes_per_sec": 0 00:27:19.678 }, 00:27:19.678 "claimed": false, 00:27:19.678 "zoned": false, 00:27:19.678 "supported_io_types": { 00:27:19.678 "read": true, 00:27:19.678 "write": true, 00:27:19.678 "unmap": false, 
00:27:19.678 "flush": false, 00:27:19.678 "reset": true, 00:27:19.678 "nvme_admin": false, 00:27:19.678 "nvme_io": false, 00:27:19.678 "nvme_io_md": false, 00:27:19.678 "write_zeroes": true, 00:27:19.678 "zcopy": false, 00:27:19.678 "get_zone_info": false, 00:27:19.678 "zone_management": false, 00:27:19.678 "zone_append": false, 00:27:19.678 "compare": false, 00:27:19.678 "compare_and_write": false, 00:27:19.678 "abort": false, 00:27:19.678 "seek_hole": false, 00:27:19.678 "seek_data": false, 00:27:19.678 "copy": false, 00:27:19.678 "nvme_iov_md": false 00:27:19.678 }, 00:27:19.678 "memory_domains": [ 00:27:19.678 { 00:27:19.678 "dma_device_id": "system", 00:27:19.678 "dma_device_type": 1 00:27:19.678 }, 00:27:19.678 { 00:27:19.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:19.678 "dma_device_type": 2 00:27:19.678 }, 00:27:19.678 { 00:27:19.678 "dma_device_id": "system", 00:27:19.678 "dma_device_type": 1 00:27:19.678 }, 00:27:19.678 { 00:27:19.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:19.678 "dma_device_type": 2 00:27:19.678 } 00:27:19.678 ], 00:27:19.678 "driver_specific": { 00:27:19.678 "raid": { 00:27:19.678 "uuid": "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef", 00:27:19.678 "strip_size_kb": 0, 00:27:19.678 "state": "online", 00:27:19.678 "raid_level": "raid1", 00:27:19.678 "superblock": true, 00:27:19.678 "num_base_bdevs": 2, 00:27:19.678 "num_base_bdevs_discovered": 2, 00:27:19.678 "num_base_bdevs_operational": 2, 00:27:19.678 "base_bdevs_list": [ 00:27:19.678 { 00:27:19.678 "name": "pt1", 00:27:19.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:19.678 "is_configured": true, 00:27:19.678 "data_offset": 256, 00:27:19.678 "data_size": 7936 00:27:19.678 }, 00:27:19.678 { 00:27:19.678 "name": "pt2", 00:27:19.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:19.678 "is_configured": true, 00:27:19.678 "data_offset": 256, 00:27:19.678 "data_size": 7936 00:27:19.678 } 00:27:19.678 ] 00:27:19.678 } 00:27:19.678 } 00:27:19.678 }' 00:27:19.678 
17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:19.678 pt2' 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.678 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.936 [2024-11-08 17:14:56.397343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef '!=' d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef ']' 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.936 [2024-11-08 17:14:56.425159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.936 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:19.936 "name": "raid_bdev1", 00:27:19.936 "uuid": 
"d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef", 00:27:19.936 "strip_size_kb": 0, 00:27:19.936 "state": "online", 00:27:19.936 "raid_level": "raid1", 00:27:19.936 "superblock": true, 00:27:19.936 "num_base_bdevs": 2, 00:27:19.936 "num_base_bdevs_discovered": 1, 00:27:19.936 "num_base_bdevs_operational": 1, 00:27:19.936 "base_bdevs_list": [ 00:27:19.936 { 00:27:19.936 "name": null, 00:27:19.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:19.936 "is_configured": false, 00:27:19.936 "data_offset": 0, 00:27:19.937 "data_size": 7936 00:27:19.937 }, 00:27:19.937 { 00:27:19.937 "name": "pt2", 00:27:19.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:19.937 "is_configured": true, 00:27:19.937 "data_offset": 256, 00:27:19.937 "data_size": 7936 00:27:19.937 } 00:27:19.937 ] 00:27:19.937 }' 00:27:19.937 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:19.937 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.195 [2024-11-08 17:14:56.733202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:20.195 [2024-11-08 17:14:56.733227] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:20.195 [2024-11-08 17:14:56.733300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:20.195 [2024-11-08 17:14:56.733344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:20.195 [2024-11-08 17:14:56.733355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.195 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.195 [2024-11-08 17:14:56.777171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:20.195 [2024-11-08 17:14:56.777223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.195 [2024-11-08 17:14:56.777239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:20.196 [2024-11-08 17:14:56.777249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.196 [2024-11-08 17:14:56.779251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.196 [2024-11-08 17:14:56.779284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:20.196 [2024-11-08 17:14:56.779354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:20.196 [2024-11-08 17:14:56.779397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:20.196 [2024-11-08 17:14:56.779482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:20.196 [2024-11-08 17:14:56.779494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:20.196 [2024-11-08 17:14:56.779700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:20.196 [2024-11-08 17:14:56.779837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:20.196 [2024-11-08 17:14:56.779844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:27:20.196 [2024-11-08 17:14:56.779959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:20.196 pt2 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.196 17:14:56 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:20.196 "name": "raid_bdev1", 00:27:20.196 "uuid": "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef", 00:27:20.196 "strip_size_kb": 0, 00:27:20.196 "state": "online", 00:27:20.196 "raid_level": "raid1", 00:27:20.196 "superblock": true, 00:27:20.196 "num_base_bdevs": 2, 00:27:20.196 "num_base_bdevs_discovered": 1, 00:27:20.196 "num_base_bdevs_operational": 1, 00:27:20.196 "base_bdevs_list": [ 00:27:20.196 { 00:27:20.196 "name": null, 00:27:20.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.196 "is_configured": false, 00:27:20.196 "data_offset": 256, 00:27:20.196 "data_size": 7936 00:27:20.196 }, 00:27:20.196 { 00:27:20.196 "name": "pt2", 00:27:20.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:20.196 "is_configured": true, 00:27:20.196 "data_offset": 256, 00:27:20.196 "data_size": 7936 00:27:20.196 } 00:27:20.196 ] 00:27:20.196 }' 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:20.196 17:14:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.454 [2024-11-08 17:14:57.073221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:20.454 [2024-11-08 17:14:57.073249] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:20.454 [2024-11-08 17:14:57.073319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:20.454 [2024-11-08 17:14:57.073366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:27:20.454 [2024-11-08 17:14:57.073374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.454 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.454 [2024-11-08 17:14:57.113236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:20.454 [2024-11-08 17:14:57.113292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.454 [2024-11-08 17:14:57.113311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:20.454 [2024-11-08 17:14:57.113320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.454 [2024-11-08 17:14:57.115350] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.454 [2024-11-08 17:14:57.115373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:20.454 [2024-11-08 17:14:57.115449] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:20.454 [2024-11-08 17:14:57.115491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:20.454 [2024-11-08 17:14:57.115602] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:20.454 [2024-11-08 17:14:57.115611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:20.454 [2024-11-08 17:14:57.115625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:27:20.454 [2024-11-08 17:14:57.115670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:20.454 [2024-11-08 17:14:57.115736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:27:20.455 [2024-11-08 17:14:57.115743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:20.455 [2024-11-08 17:14:57.115975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:20.455 [2024-11-08 17:14:57.116086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:27:20.455 [2024-11-08 17:14:57.116094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:27:20.455 [2024-11-08 17:14:57.116207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:20.455 pt1 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:20.455 "name": "raid_bdev1", 00:27:20.455 "uuid": "d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef", 00:27:20.455 "strip_size_kb": 0, 00:27:20.455 "state": "online", 00:27:20.455 
"raid_level": "raid1", 00:27:20.455 "superblock": true, 00:27:20.455 "num_base_bdevs": 2, 00:27:20.455 "num_base_bdevs_discovered": 1, 00:27:20.455 "num_base_bdevs_operational": 1, 00:27:20.455 "base_bdevs_list": [ 00:27:20.455 { 00:27:20.455 "name": null, 00:27:20.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.455 "is_configured": false, 00:27:20.455 "data_offset": 256, 00:27:20.455 "data_size": 7936 00:27:20.455 }, 00:27:20.455 { 00:27:20.455 "name": "pt2", 00:27:20.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:20.455 "is_configured": true, 00:27:20.455 "data_offset": 256, 00:27:20.455 "data_size": 7936 00:27:20.455 } 00:27:20.455 ] 00:27:20.455 }' 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:20.455 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.715 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:20.715 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:27:20.715 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.715 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.715 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:27:20.987 [2024-11-08 17:14:57.457524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef '!=' d86aa4fe-3998-44a9-a5c3-80ecb1ded7ef ']' 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 84418 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 84418 ']' 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 84418 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84418 00:27:20.987 killing process with pid 84418 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84418' 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 84418 00:27:20.987 [2024-11-08 17:14:57.507033] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:20.987 17:14:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 84418 00:27:20.987 [2024-11-08 17:14:57.507121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:20.987 [2024-11-08 17:14:57.507170] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:20.988 [2024-11-08 17:14:57.507183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:27:20.988 [2024-11-08 17:14:57.612732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:21.554 17:14:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:27:21.554 00:27:21.554 real 0m4.354s 00:27:21.554 user 0m6.634s 00:27:21.554 sys 0m0.752s 00:27:21.554 ************************************ 00:27:21.554 END TEST raid_superblock_test_4k 00:27:21.554 ************************************ 00:27:21.555 17:14:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:21.555 17:14:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.555 17:14:58 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:27:21.555 17:14:58 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:27:21.555 17:14:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:27:21.555 17:14:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:21.555 17:14:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:21.555 ************************************ 00:27:21.555 START TEST raid_rebuild_test_sb_4k 00:27:21.555 ************************************ 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:27:21.555 
17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:21.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=84724 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 84724 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 84724 ']' 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.555 17:14:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:21.822 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:21.822 Zero copy mechanism will not be used. 00:27:21.822 [2024-11-08 17:14:58.346230] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:27:21.822 [2024-11-08 17:14:58.346413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84724 ] 00:27:21.822 [2024-11-08 17:14:58.522229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.080 [2024-11-08 17:14:58.642678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.338 [2024-11-08 17:14:58.794717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:22.338 [2024-11-08 17:14:58.794799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.596 BaseBdev1_malloc 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.596 [2024-11-08 17:14:59.295739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:22.596 [2024-11-08 17:14:59.295824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.596 [2024-11-08 17:14:59.295849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:22.596 [2024-11-08 17:14:59.295861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.596 [2024-11-08 17:14:59.298154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.596 [2024-11-08 17:14:59.298190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:22.596 BaseBdev1 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.596 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.854 BaseBdev2_malloc 00:27:22.854 17:14:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.854 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:22.854 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.854 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.854 [2024-11-08 17:14:59.333991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:22.854 [2024-11-08 17:14:59.334052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.854 [2024-11-08 17:14:59.334071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:22.854 [2024-11-08 17:14:59.334081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.854 [2024-11-08 17:14:59.336309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.854 [2024-11-08 17:14:59.336346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:22.854 BaseBdev2 00:27:22.854 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.854 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:27:22.854 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.854 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.854 spare_malloc 00:27:22.854 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.854 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:22.855 17:14:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.855 spare_delay 00:27:22.855 spare 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.855 [2024-11-08 17:14:59.393253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:22.855 [2024-11-08 17:14:59.393421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.855 [2024-11-08 17:14:59.393445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:22.855 [2024-11-08 17:14:59.393456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.855 [2024-11-08 17:14:59.395740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.855 [2024-11-08 17:14:59.395792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.855 [2024-11-08 17:14:59.401321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:27:22.855 [2024-11-08 17:14:59.403274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:22.855 [2024-11-08 17:14:59.403443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:22.855 [2024-11-08 17:14:59.403459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:22.855 [2024-11-08 17:14:59.403712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:22.855 [2024-11-08 17:14:59.403888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:22.855 [2024-11-08 17:14:59.403897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:22.855 [2024-11-08 17:14:59.404038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:22.855 17:14:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:22.855 "name": "raid_bdev1", 00:27:22.855 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:22.855 "strip_size_kb": 0, 00:27:22.855 "state": "online", 00:27:22.855 "raid_level": "raid1", 00:27:22.855 "superblock": true, 00:27:22.855 "num_base_bdevs": 2, 00:27:22.855 "num_base_bdevs_discovered": 2, 00:27:22.855 "num_base_bdevs_operational": 2, 00:27:22.855 "base_bdevs_list": [ 00:27:22.855 { 00:27:22.855 "name": "BaseBdev1", 00:27:22.855 "uuid": "c670a90b-f0e8-5881-a284-64f148fb50a1", 00:27:22.855 "is_configured": true, 00:27:22.855 "data_offset": 256, 00:27:22.855 "data_size": 7936 00:27:22.855 }, 00:27:22.855 { 00:27:22.855 "name": "BaseBdev2", 00:27:22.855 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:22.855 "is_configured": true, 00:27:22.855 "data_offset": 256, 00:27:22.855 "data_size": 7936 00:27:22.855 } 00:27:22.855 ] 00:27:22.855 }' 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:22.855 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:23.113 [2024-11-08 17:14:59.721722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:23.113 17:14:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:23.113 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:23.371 [2024-11-08 17:14:59.965528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:23.371 /dev/nbd0 00:27:23.371 17:14:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:23.371 1+0 records in 00:27:23.371 1+0 records out 00:27:23.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264273 s, 15.5 MB/s 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:27:23.371 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:27:24.305 7936+0 records in 00:27:24.305 7936+0 records out 00:27:24.305 32505856 bytes (33 MB, 31 MiB) copied, 0.732711 s, 44.4 MB/s 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:24.305 [2024-11-08 17:15:00.968922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.305 [2024-11-08 17:15:00.981228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:24.305 17:15:00 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.305 17:15:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.305 17:15:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.568 17:15:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:24.568 "name": "raid_bdev1", 00:27:24.568 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:24.568 
"strip_size_kb": 0, 00:27:24.568 "state": "online", 00:27:24.568 "raid_level": "raid1", 00:27:24.568 "superblock": true, 00:27:24.568 "num_base_bdevs": 2, 00:27:24.568 "num_base_bdevs_discovered": 1, 00:27:24.568 "num_base_bdevs_operational": 1, 00:27:24.568 "base_bdevs_list": [ 00:27:24.568 { 00:27:24.568 "name": null, 00:27:24.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.568 "is_configured": false, 00:27:24.568 "data_offset": 0, 00:27:24.568 "data_size": 7936 00:27:24.568 }, 00:27:24.568 { 00:27:24.568 "name": "BaseBdev2", 00:27:24.568 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:24.568 "is_configured": true, 00:27:24.568 "data_offset": 256, 00:27:24.568 "data_size": 7936 00:27:24.568 } 00:27:24.568 ] 00:27:24.568 }' 00:27:24.568 17:15:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:24.568 17:15:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.826 17:15:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:24.826 17:15:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.826 17:15:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.826 [2024-11-08 17:15:01.293323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:24.826 [2024-11-08 17:15:01.305625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:27:24.826 17:15:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.826 17:15:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:24.826 [2024-11-08 17:15:01.307733] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:25.761 "name": "raid_bdev1", 00:27:25.761 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:25.761 "strip_size_kb": 0, 00:27:25.761 "state": "online", 00:27:25.761 "raid_level": "raid1", 00:27:25.761 "superblock": true, 00:27:25.761 "num_base_bdevs": 2, 00:27:25.761 "num_base_bdevs_discovered": 2, 00:27:25.761 "num_base_bdevs_operational": 2, 00:27:25.761 "process": { 00:27:25.761 "type": "rebuild", 00:27:25.761 "target": "spare", 00:27:25.761 "progress": { 00:27:25.761 "blocks": 2560, 00:27:25.761 "percent": 32 00:27:25.761 } 00:27:25.761 }, 00:27:25.761 "base_bdevs_list": [ 00:27:25.761 { 00:27:25.761 "name": "spare", 00:27:25.761 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:25.761 "is_configured": true, 00:27:25.761 "data_offset": 256, 00:27:25.761 "data_size": 7936 00:27:25.761 }, 00:27:25.761 { 00:27:25.761 "name": "BaseBdev2", 
00:27:25.761 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:25.761 "is_configured": true, 00:27:25.761 "data_offset": 256, 00:27:25.761 "data_size": 7936 00:27:25.761 } 00:27:25.761 ] 00:27:25.761 }' 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.761 [2024-11-08 17:15:02.409672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:25.761 [2024-11-08 17:15:02.414801] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:25.761 [2024-11-08 17:15:02.414962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:25.761 [2024-11-08 17:15:02.414981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:25.761 [2024-11-08 17:15:02.414993] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.761 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.019 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.019 "name": "raid_bdev1", 00:27:26.019 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:26.019 "strip_size_kb": 0, 00:27:26.019 "state": "online", 00:27:26.019 "raid_level": "raid1", 00:27:26.019 "superblock": true, 00:27:26.019 "num_base_bdevs": 2, 00:27:26.019 "num_base_bdevs_discovered": 1, 00:27:26.019 "num_base_bdevs_operational": 1, 00:27:26.019 "base_bdevs_list": [ 00:27:26.019 { 00:27:26.019 "name": 
null, 00:27:26.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.019 "is_configured": false, 00:27:26.019 "data_offset": 0, 00:27:26.019 "data_size": 7936 00:27:26.019 }, 00:27:26.019 { 00:27:26.019 "name": "BaseBdev2", 00:27:26.019 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:26.019 "is_configured": true, 00:27:26.019 "data_offset": 256, 00:27:26.019 "data_size": 7936 00:27:26.019 } 00:27:26.019 ] 00:27:26.019 }' 00:27:26.019 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.019 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.277 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:26.277 "name": "raid_bdev1", 00:27:26.277 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:26.277 
"strip_size_kb": 0, 00:27:26.277 "state": "online", 00:27:26.277 "raid_level": "raid1", 00:27:26.277 "superblock": true, 00:27:26.277 "num_base_bdevs": 2, 00:27:26.277 "num_base_bdevs_discovered": 1, 00:27:26.277 "num_base_bdevs_operational": 1, 00:27:26.277 "base_bdevs_list": [ 00:27:26.277 { 00:27:26.277 "name": null, 00:27:26.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.277 "is_configured": false, 00:27:26.277 "data_offset": 0, 00:27:26.277 "data_size": 7936 00:27:26.277 }, 00:27:26.277 { 00:27:26.278 "name": "BaseBdev2", 00:27:26.278 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:26.278 "is_configured": true, 00:27:26.278 "data_offset": 256, 00:27:26.278 "data_size": 7936 00:27:26.278 } 00:27:26.278 ] 00:27:26.278 }' 00:27:26.278 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:26.278 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:26.278 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:26.278 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:26.278 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:26.278 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.278 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.278 [2024-11-08 17:15:02.859660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:26.278 [2024-11-08 17:15:02.872127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:27:26.278 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.278 17:15:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:27:26.278 [2024-11-08 17:15:02.874361] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:27.212 "name": "raid_bdev1", 00:27:27.212 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:27.212 "strip_size_kb": 0, 00:27:27.212 "state": "online", 00:27:27.212 "raid_level": "raid1", 00:27:27.212 "superblock": true, 00:27:27.212 "num_base_bdevs": 2, 00:27:27.212 "num_base_bdevs_discovered": 2, 00:27:27.212 "num_base_bdevs_operational": 2, 00:27:27.212 "process": { 00:27:27.212 "type": "rebuild", 00:27:27.212 "target": "spare", 00:27:27.212 "progress": { 00:27:27.212 "blocks": 2560, 00:27:27.212 "percent": 32 00:27:27.212 } 00:27:27.212 }, 00:27:27.212 "base_bdevs_list": [ 00:27:27.212 { 
00:27:27.212 "name": "spare", 00:27:27.212 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:27.212 "is_configured": true, 00:27:27.212 "data_offset": 256, 00:27:27.212 "data_size": 7936 00:27:27.212 }, 00:27:27.212 { 00:27:27.212 "name": "BaseBdev2", 00:27:27.212 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:27.212 "is_configured": true, 00:27:27.212 "data_offset": 256, 00:27:27.212 "data_size": 7936 00:27:27.212 } 00:27:27.212 ] 00:27:27.212 }' 00:27:27.212 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:27:27.470 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=591 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:27.470 17:15:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.470 17:15:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:27.470 "name": "raid_bdev1", 00:27:27.470 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:27.470 "strip_size_kb": 0, 00:27:27.470 "state": "online", 00:27:27.470 "raid_level": "raid1", 00:27:27.470 "superblock": true, 00:27:27.470 "num_base_bdevs": 2, 00:27:27.470 "num_base_bdevs_discovered": 2, 00:27:27.470 "num_base_bdevs_operational": 2, 00:27:27.470 "process": { 00:27:27.470 "type": "rebuild", 00:27:27.470 "target": "spare", 00:27:27.470 "progress": { 00:27:27.470 "blocks": 2816, 00:27:27.470 "percent": 35 00:27:27.470 } 00:27:27.470 }, 00:27:27.470 "base_bdevs_list": [ 00:27:27.470 { 00:27:27.470 "name": "spare", 00:27:27.470 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:27.470 "is_configured": true, 00:27:27.470 "data_offset": 256, 00:27:27.470 "data_size": 7936 00:27:27.470 }, 00:27:27.470 { 00:27:27.470 "name": "BaseBdev2", 00:27:27.470 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:27.470 
"is_configured": true, 00:27:27.470 "data_offset": 256, 00:27:27.470 "data_size": 7936 00:27:27.470 } 00:27:27.470 ] 00:27:27.470 }' 00:27:27.470 17:15:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:27.470 17:15:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:27.470 17:15:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:27.470 17:15:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:27.470 17:15:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.404 17:15:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:28.404 "name": "raid_bdev1", 00:27:28.404 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:28.404 "strip_size_kb": 0, 00:27:28.404 "state": "online", 00:27:28.404 "raid_level": "raid1", 00:27:28.404 "superblock": true, 00:27:28.404 "num_base_bdevs": 2, 00:27:28.404 "num_base_bdevs_discovered": 2, 00:27:28.404 "num_base_bdevs_operational": 2, 00:27:28.404 "process": { 00:27:28.404 "type": "rebuild", 00:27:28.404 "target": "spare", 00:27:28.404 "progress": { 00:27:28.404 "blocks": 5376, 00:27:28.404 "percent": 67 00:27:28.404 } 00:27:28.404 }, 00:27:28.404 "base_bdevs_list": [ 00:27:28.404 { 00:27:28.404 "name": "spare", 00:27:28.404 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:28.404 "is_configured": true, 00:27:28.404 "data_offset": 256, 00:27:28.404 "data_size": 7936 00:27:28.404 }, 00:27:28.404 { 00:27:28.404 "name": "BaseBdev2", 00:27:28.404 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:28.404 "is_configured": true, 00:27:28.404 "data_offset": 256, 00:27:28.404 "data_size": 7936 00:27:28.404 } 00:27:28.404 ] 00:27:28.404 }' 00:27:28.404 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:28.661 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:28.661 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:28.661 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:28.661 17:15:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:29.594 [2024-11-08 17:15:05.992767] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:29.594 [2024-11-08 17:15:05.992953] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:29.594 
[2024-11-08 17:15:05.993051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:29.594 "name": "raid_bdev1", 00:27:29.594 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:29.594 "strip_size_kb": 0, 00:27:29.594 "state": "online", 00:27:29.594 "raid_level": "raid1", 00:27:29.594 "superblock": true, 00:27:29.594 "num_base_bdevs": 2, 00:27:29.594 "num_base_bdevs_discovered": 2, 00:27:29.594 "num_base_bdevs_operational": 2, 00:27:29.594 "base_bdevs_list": [ 00:27:29.594 { 00:27:29.594 "name": "spare", 00:27:29.594 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:29.594 "is_configured": true, 00:27:29.594 
"data_offset": 256, 00:27:29.594 "data_size": 7936 00:27:29.594 }, 00:27:29.594 { 00:27:29.594 "name": "BaseBdev2", 00:27:29.594 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:29.594 "is_configured": true, 00:27:29.594 "data_offset": 256, 00:27:29.594 "data_size": 7936 00:27:29.594 } 00:27:29.594 ] 00:27:29.594 }' 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.594 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:27:29.852 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:29.852 "name": "raid_bdev1", 00:27:29.852 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:29.852 "strip_size_kb": 0, 00:27:29.852 "state": "online", 00:27:29.852 "raid_level": "raid1", 00:27:29.852 "superblock": true, 00:27:29.852 "num_base_bdevs": 2, 00:27:29.852 "num_base_bdevs_discovered": 2, 00:27:29.852 "num_base_bdevs_operational": 2, 00:27:29.852 "base_bdevs_list": [ 00:27:29.852 { 00:27:29.852 "name": "spare", 00:27:29.852 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:29.852 "is_configured": true, 00:27:29.852 "data_offset": 256, 00:27:29.852 "data_size": 7936 00:27:29.852 }, 00:27:29.852 { 00:27:29.852 "name": "BaseBdev2", 00:27:29.853 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:29.853 "is_configured": true, 00:27:29.853 "data_offset": 256, 00:27:29.853 "data_size": 7936 00:27:29.853 } 00:27:29.853 ] 00:27:29.853 }' 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:29.853 "name": "raid_bdev1", 00:27:29.853 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:29.853 "strip_size_kb": 0, 00:27:29.853 "state": "online", 00:27:29.853 "raid_level": "raid1", 00:27:29.853 "superblock": true, 00:27:29.853 "num_base_bdevs": 2, 00:27:29.853 "num_base_bdevs_discovered": 2, 00:27:29.853 "num_base_bdevs_operational": 2, 00:27:29.853 "base_bdevs_list": [ 00:27:29.853 { 00:27:29.853 "name": "spare", 00:27:29.853 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:29.853 "is_configured": true, 00:27:29.853 "data_offset": 256, 00:27:29.853 "data_size": 7936 00:27:29.853 }, 00:27:29.853 { 00:27:29.853 "name": "BaseBdev2", 00:27:29.853 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:29.853 
"is_configured": true, 00:27:29.853 "data_offset": 256, 00:27:29.853 "data_size": 7936 00:27:29.853 } 00:27:29.853 ] 00:27:29.853 }' 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:29.853 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:30.111 [2024-11-08 17:15:06.708518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:30.111 [2024-11-08 17:15:06.708666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:30.111 [2024-11-08 17:15:06.708775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:30.111 [2024-11-08 17:15:06.708840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:30.111 [2024-11-08 17:15:06.708851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.111 17:15:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:30.111 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:30.369 /dev/nbd0 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:27:30.369 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:30.370 1+0 records in 00:27:30.370 1+0 records out 00:27:30.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000145291 s, 28.2 MB/s 00:27:30.370 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:30.370 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:27:30.370 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:30.370 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:27:30.370 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:27:30.370 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:30.370 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:30.370 17:15:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:27:30.627 /dev/nbd1 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:30.627 1+0 records in 00:27:30.627 1+0 records out 00:27:30.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293657 s, 13.9 MB/s 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:30.627 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:27:30.628 17:15:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:30.628 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:30.885 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:30.885 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:30.885 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:30.885 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:30.885 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:30.885 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:30.885 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:30.885 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:30.885 
17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:30.885 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:27:31.144 [2024-11-08 17:15:07.777517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:31.144 [2024-11-08 17:15:07.777676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.144 [2024-11-08 17:15:07.777703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:31.144 [2024-11-08 17:15:07.777711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.144 [2024-11-08 17:15:07.779750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.144 [2024-11-08 17:15:07.779791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:31.144 [2024-11-08 17:15:07.779882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:31.144 [2024-11-08 17:15:07.779931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:31.144 [2024-11-08 17:15:07.780055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:31.144 spare 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.144 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.401 [2024-11-08 17:15:07.880146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:31.401 [2024-11-08 17:15:07.880324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:31.401 [2024-11-08 17:15:07.880664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:27:31.401 [2024-11-08 17:15:07.880871] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:31.401 [2024-11-08 17:15:07.880881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:31.401 [2024-11-08 17:15:07.881055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:31.401 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.401 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:31.401 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:31.401 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:31.401 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:31.401 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:31.401 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:31.402 "name": "raid_bdev1", 00:27:31.402 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:31.402 "strip_size_kb": 0, 00:27:31.402 "state": "online", 00:27:31.402 "raid_level": "raid1", 00:27:31.402 "superblock": true, 00:27:31.402 "num_base_bdevs": 2, 00:27:31.402 "num_base_bdevs_discovered": 2, 00:27:31.402 "num_base_bdevs_operational": 2, 00:27:31.402 "base_bdevs_list": [ 00:27:31.402 { 00:27:31.402 "name": "spare", 00:27:31.402 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:31.402 "is_configured": true, 00:27:31.402 "data_offset": 256, 00:27:31.402 "data_size": 7936 00:27:31.402 }, 00:27:31.402 { 00:27:31.402 "name": "BaseBdev2", 00:27:31.402 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:31.402 "is_configured": true, 00:27:31.402 "data_offset": 256, 00:27:31.402 "data_size": 7936 00:27:31.402 } 00:27:31.402 ] 00:27:31.402 }' 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:31.402 17:15:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:31.660 "name": "raid_bdev1", 00:27:31.660 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:31.660 "strip_size_kb": 0, 00:27:31.660 "state": "online", 00:27:31.660 "raid_level": "raid1", 00:27:31.660 "superblock": true, 00:27:31.660 "num_base_bdevs": 2, 00:27:31.660 "num_base_bdevs_discovered": 2, 00:27:31.660 "num_base_bdevs_operational": 2, 00:27:31.660 "base_bdevs_list": [ 00:27:31.660 { 00:27:31.660 "name": "spare", 00:27:31.660 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:31.660 "is_configured": true, 00:27:31.660 "data_offset": 256, 00:27:31.660 "data_size": 7936 00:27:31.660 }, 00:27:31.660 { 00:27:31.660 "name": "BaseBdev2", 00:27:31.660 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:31.660 "is_configured": true, 00:27:31.660 "data_offset": 256, 00:27:31.660 "data_size": 7936 00:27:31.660 } 00:27:31.660 ] 00:27:31.660 }' 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.660 [2024-11-08 17:15:08.345690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:31.660 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:31.661 17:15:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.661 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.918 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:31.918 "name": "raid_bdev1", 00:27:31.918 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:31.918 "strip_size_kb": 0, 00:27:31.918 "state": "online", 00:27:31.918 "raid_level": "raid1", 00:27:31.918 "superblock": true, 00:27:31.918 "num_base_bdevs": 2, 00:27:31.918 "num_base_bdevs_discovered": 1, 00:27:31.918 "num_base_bdevs_operational": 1, 00:27:31.919 "base_bdevs_list": [ 00:27:31.919 { 00:27:31.919 "name": null, 00:27:31.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.919 "is_configured": false, 00:27:31.919 "data_offset": 0, 00:27:31.919 "data_size": 7936 00:27:31.919 }, 00:27:31.919 { 00:27:31.919 "name": "BaseBdev2", 00:27:31.919 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:31.919 "is_configured": true, 00:27:31.919 "data_offset": 256, 00:27:31.919 "data_size": 7936 00:27:31.919 } 00:27:31.919 ] 00:27:31.919 }' 00:27:31.919 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:31.919 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:27:32.177 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:32.177 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.177 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:32.177 [2024-11-08 17:15:08.685797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:32.177 [2024-11-08 17:15:08.685989] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:32.177 [2024-11-08 17:15:08.686004] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:32.177 [2024-11-08 17:15:08.686037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:32.177 [2024-11-08 17:15:08.696150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:27:32.177 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.177 17:15:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:27:32.177 [2024-11-08 17:15:08.697867] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:33.113 "name": "raid_bdev1", 00:27:33.113 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:33.113 "strip_size_kb": 0, 00:27:33.113 "state": "online", 00:27:33.113 "raid_level": "raid1", 00:27:33.113 "superblock": true, 00:27:33.113 "num_base_bdevs": 2, 00:27:33.113 "num_base_bdevs_discovered": 2, 00:27:33.113 "num_base_bdevs_operational": 2, 00:27:33.113 "process": { 00:27:33.113 "type": "rebuild", 00:27:33.113 "target": "spare", 00:27:33.113 "progress": { 00:27:33.113 "blocks": 2560, 00:27:33.113 "percent": 32 00:27:33.113 } 00:27:33.113 }, 00:27:33.113 "base_bdevs_list": [ 00:27:33.113 { 00:27:33.113 "name": "spare", 00:27:33.113 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:33.113 "is_configured": true, 00:27:33.113 "data_offset": 256, 00:27:33.113 "data_size": 7936 00:27:33.113 }, 00:27:33.113 { 00:27:33.113 "name": "BaseBdev2", 00:27:33.113 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:33.113 "is_configured": true, 00:27:33.113 "data_offset": 256, 00:27:33.113 "data_size": 7936 00:27:33.113 } 00:27:33.113 ] 00:27:33.113 }' 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:33.113 17:15:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.113 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:33.113 [2024-11-08 17:15:09.807877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:33.373 [2024-11-08 17:15:09.904958] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:33.373 [2024-11-08 17:15:09.905041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.373 [2024-11-08 17:15:09.905055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:33.373 [2024-11-08 17:15:09.905063] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:33.373 
17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:33.373 "name": "raid_bdev1", 00:27:33.373 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:33.373 "strip_size_kb": 0, 00:27:33.373 "state": "online", 00:27:33.373 "raid_level": "raid1", 00:27:33.373 "superblock": true, 00:27:33.373 "num_base_bdevs": 2, 00:27:33.373 "num_base_bdevs_discovered": 1, 00:27:33.373 "num_base_bdevs_operational": 1, 00:27:33.373 "base_bdevs_list": [ 00:27:33.373 { 00:27:33.373 "name": null, 00:27:33.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.373 "is_configured": false, 00:27:33.373 "data_offset": 0, 00:27:33.373 "data_size": 7936 00:27:33.373 }, 00:27:33.373 { 00:27:33.373 "name": "BaseBdev2", 00:27:33.373 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:33.373 "is_configured": true, 00:27:33.373 "data_offset": 256, 00:27:33.373 "data_size": 7936 00:27:33.373 } 00:27:33.373 ] 00:27:33.373 }' 00:27:33.373 17:15:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:33.373 17:15:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:33.631 17:15:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:33.631 17:15:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.631 17:15:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:33.631 [2024-11-08 17:15:10.256564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:33.631 [2024-11-08 17:15:10.256634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.631 [2024-11-08 17:15:10.256655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:33.631 [2024-11-08 17:15:10.256666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.631 [2024-11-08 17:15:10.257123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.631 [2024-11-08 17:15:10.257143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:33.631 [2024-11-08 17:15:10.257231] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:33.631 [2024-11-08 17:15:10.257249] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:33.631 [2024-11-08 17:15:10.257260] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:33.631 [2024-11-08 17:15:10.257280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:33.631 [2024-11-08 17:15:10.266657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:27:33.631 spare 00:27:33.631 17:15:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.631 17:15:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:27:33.631 [2024-11-08 17:15:10.268319] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:34.565 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:34.565 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:34.565 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:34.565 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:34.565 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:34.565 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.565 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.565 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.565 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:34.824 "name": "raid_bdev1", 00:27:34.824 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:34.824 "strip_size_kb": 0, 00:27:34.824 
"state": "online", 00:27:34.824 "raid_level": "raid1", 00:27:34.824 "superblock": true, 00:27:34.824 "num_base_bdevs": 2, 00:27:34.824 "num_base_bdevs_discovered": 2, 00:27:34.824 "num_base_bdevs_operational": 2, 00:27:34.824 "process": { 00:27:34.824 "type": "rebuild", 00:27:34.824 "target": "spare", 00:27:34.824 "progress": { 00:27:34.824 "blocks": 2560, 00:27:34.824 "percent": 32 00:27:34.824 } 00:27:34.824 }, 00:27:34.824 "base_bdevs_list": [ 00:27:34.824 { 00:27:34.824 "name": "spare", 00:27:34.824 "uuid": "043920a6-80c5-5611-9a07-7d5ec7ac0e47", 00:27:34.824 "is_configured": true, 00:27:34.824 "data_offset": 256, 00:27:34.824 "data_size": 7936 00:27:34.824 }, 00:27:34.824 { 00:27:34.824 "name": "BaseBdev2", 00:27:34.824 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:34.824 "is_configured": true, 00:27:34.824 "data_offset": 256, 00:27:34.824 "data_size": 7936 00:27:34.824 } 00:27:34.824 ] 00:27:34.824 }' 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:34.824 [2024-11-08 17:15:11.370928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:34.824 [2024-11-08 17:15:11.374865] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:27:34.824 [2024-11-08 17:15:11.374918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:34.824 [2024-11-08 17:15:11.374933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:34.824 [2024-11-08 17:15:11.374939] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.824 17:15:11 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:34.824 "name": "raid_bdev1", 00:27:34.824 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:34.824 "strip_size_kb": 0, 00:27:34.824 "state": "online", 00:27:34.824 "raid_level": "raid1", 00:27:34.824 "superblock": true, 00:27:34.824 "num_base_bdevs": 2, 00:27:34.824 "num_base_bdevs_discovered": 1, 00:27:34.824 "num_base_bdevs_operational": 1, 00:27:34.824 "base_bdevs_list": [ 00:27:34.824 { 00:27:34.824 "name": null, 00:27:34.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.824 "is_configured": false, 00:27:34.824 "data_offset": 0, 00:27:34.824 "data_size": 7936 00:27:34.824 }, 00:27:34.824 { 00:27:34.824 "name": "BaseBdev2", 00:27:34.824 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:34.824 "is_configured": true, 00:27:34.824 "data_offset": 256, 00:27:34.824 "data_size": 7936 00:27:34.824 } 00:27:34.824 ] 00:27:34.824 }' 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:34.824 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:35.083 "name": "raid_bdev1", 00:27:35.083 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:35.083 "strip_size_kb": 0, 00:27:35.083 "state": "online", 00:27:35.083 "raid_level": "raid1", 00:27:35.083 "superblock": true, 00:27:35.083 "num_base_bdevs": 2, 00:27:35.083 "num_base_bdevs_discovered": 1, 00:27:35.083 "num_base_bdevs_operational": 1, 00:27:35.083 "base_bdevs_list": [ 00:27:35.083 { 00:27:35.083 "name": null, 00:27:35.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.083 "is_configured": false, 00:27:35.083 "data_offset": 0, 00:27:35.083 "data_size": 7936 00:27:35.083 }, 00:27:35.083 { 00:27:35.083 "name": "BaseBdev2", 00:27:35.083 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:35.083 "is_configured": true, 00:27:35.083 "data_offset": 256, 00:27:35.083 "data_size": 7936 00:27:35.083 } 00:27:35.083 ] 00:27:35.083 }' 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:35.083 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:35.341 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:35.341 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:27:35.341 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.341 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.341 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.341 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:35.341 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.341 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.341 [2024-11-08 17:15:11.830304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:35.341 [2024-11-08 17:15:11.830485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.341 [2024-11-08 17:15:11.830519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:35.341 [2024-11-08 17:15:11.830529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.341 [2024-11-08 17:15:11.831022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.341 [2024-11-08 17:15:11.831039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:35.341 [2024-11-08 17:15:11.831121] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:35.341 [2024-11-08 17:15:11.831134] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:35.341 [2024-11-08 17:15:11.831143] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:35.341 [2024-11-08 17:15:11.831153] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:27:35.341 BaseBdev1 00:27:35.341 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.341 17:15:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:36.275 "name": "raid_bdev1", 00:27:36.275 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:36.275 "strip_size_kb": 0, 00:27:36.275 "state": "online", 00:27:36.275 "raid_level": "raid1", 00:27:36.275 "superblock": true, 00:27:36.275 "num_base_bdevs": 2, 00:27:36.275 "num_base_bdevs_discovered": 1, 00:27:36.275 "num_base_bdevs_operational": 1, 00:27:36.275 "base_bdevs_list": [ 00:27:36.275 { 00:27:36.275 "name": null, 00:27:36.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.275 "is_configured": false, 00:27:36.275 "data_offset": 0, 00:27:36.275 "data_size": 7936 00:27:36.275 }, 00:27:36.275 { 00:27:36.275 "name": "BaseBdev2", 00:27:36.275 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:36.275 "is_configured": true, 00:27:36.275 "data_offset": 256, 00:27:36.275 "data_size": 7936 00:27:36.275 } 00:27:36.275 ] 00:27:36.275 }' 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:36.275 17:15:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.533 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:36.533 "name": "raid_bdev1", 00:27:36.533 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:36.533 "strip_size_kb": 0, 00:27:36.533 "state": "online", 00:27:36.533 "raid_level": "raid1", 00:27:36.533 "superblock": true, 00:27:36.533 "num_base_bdevs": 2, 00:27:36.533 "num_base_bdevs_discovered": 1, 00:27:36.533 "num_base_bdevs_operational": 1, 00:27:36.533 "base_bdevs_list": [ 00:27:36.533 { 00:27:36.533 "name": null, 00:27:36.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.533 "is_configured": false, 00:27:36.533 "data_offset": 0, 00:27:36.533 "data_size": 7936 00:27:36.533 }, 00:27:36.534 { 00:27:36.534 "name": "BaseBdev2", 00:27:36.534 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:36.534 "is_configured": true, 00:27:36.534 "data_offset": 256, 00:27:36.534 "data_size": 7936 00:27:36.534 } 00:27:36.534 ] 00:27:36.534 }' 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:36.534 [2024-11-08 17:15:13.234648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:36.534 [2024-11-08 17:15:13.234841] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:36.534 [2024-11-08 17:15:13.234855] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:36.534 request: 00:27:36.534 { 00:27:36.534 "base_bdev": "BaseBdev1", 00:27:36.534 "raid_bdev": "raid_bdev1", 00:27:36.534 "method": "bdev_raid_add_base_bdev", 00:27:36.534 "req_id": 1 00:27:36.534 } 00:27:36.534 Got JSON-RPC error response 00:27:36.534 response: 00:27:36.534 { 00:27:36.534 "code": -22, 00:27:36.534 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:36.534 } 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:36.534 17:15:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:37.905 "name": "raid_bdev1", 00:27:37.905 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:37.905 "strip_size_kb": 0, 00:27:37.905 "state": "online", 00:27:37.905 "raid_level": "raid1", 00:27:37.905 "superblock": true, 00:27:37.905 "num_base_bdevs": 2, 00:27:37.905 "num_base_bdevs_discovered": 1, 00:27:37.905 "num_base_bdevs_operational": 1, 00:27:37.905 "base_bdevs_list": [ 00:27:37.905 { 00:27:37.905 "name": null, 00:27:37.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.905 "is_configured": false, 00:27:37.905 "data_offset": 0, 00:27:37.905 "data_size": 7936 00:27:37.905 }, 00:27:37.905 { 00:27:37.905 "name": "BaseBdev2", 00:27:37.905 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:37.905 "is_configured": true, 00:27:37.905 "data_offset": 256, 00:27:37.905 "data_size": 7936 00:27:37.905 } 00:27:37.905 ] 00:27:37.905 }' 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:37.905 17:15:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:37.905 "name": "raid_bdev1", 00:27:37.905 "uuid": "a3b82bf6-65bd-4d18-806b-9bbe7e6cac1d", 00:27:37.905 "strip_size_kb": 0, 00:27:37.905 "state": "online", 00:27:37.905 "raid_level": "raid1", 00:27:37.905 "superblock": true, 00:27:37.905 "num_base_bdevs": 2, 00:27:37.905 "num_base_bdevs_discovered": 1, 00:27:37.905 "num_base_bdevs_operational": 1, 00:27:37.905 "base_bdevs_list": [ 00:27:37.905 { 00:27:37.905 "name": null, 00:27:37.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.905 "is_configured": false, 00:27:37.905 "data_offset": 0, 00:27:37.905 "data_size": 7936 00:27:37.905 }, 00:27:37.905 { 00:27:37.905 "name": "BaseBdev2", 00:27:37.905 "uuid": "44098db2-ad2c-5cac-a9ac-48b54da06c71", 00:27:37.905 "is_configured": true, 00:27:37.905 "data_offset": 256, 00:27:37.905 "data_size": 7936 00:27:37.905 } 00:27:37.905 ] 00:27:37.905 }' 00:27:37.905 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:38.163 17:15:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 84724 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 84724 ']' 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 84724 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84724 00:27:38.163 killing process with pid 84724 00:27:38.163 Received shutdown signal, test time was about 60.000000 seconds 00:27:38.163 00:27:38.163 Latency(us) 00:27:38.163 [2024-11-08T17:15:14.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.163 [2024-11-08T17:15:14.878Z] =================================================================================================================== 00:27:38.163 [2024-11-08T17:15:14.878Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84724' 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 84724 00:27:38.163 [2024-11-08 17:15:14.687865] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:38.163 17:15:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 84724 00:27:38.163 [2024-11-08 17:15:14.687988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:38.163 [2024-11-08 
17:15:14.688038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:38.163 [2024-11-08 17:15:14.688048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:38.163 [2024-11-08 17:15:14.844271] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:39.098 17:15:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:27:39.098 00:27:39.098 real 0m17.197s 00:27:39.098 user 0m21.856s 00:27:39.098 sys 0m1.985s 00:27:39.098 17:15:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:39.098 ************************************ 00:27:39.098 END TEST raid_rebuild_test_sb_4k 00:27:39.098 ************************************ 00:27:39.098 17:15:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:39.098 17:15:15 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:27:39.098 17:15:15 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:27:39.098 17:15:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:39.098 17:15:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:39.098 17:15:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:39.098 ************************************ 00:27:39.098 START TEST raid_state_function_test_sb_md_separate 00:27:39.098 ************************************ 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:27:39.098 
17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:39.098 Process raid pid: 85393 00:27:39.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=85393 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85393' 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 85393 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 85393 ']' 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:39.098 17:15:15 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.098 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:39.099 17:15:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:39.099 [2024-11-08 17:15:15.547864] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:27:39.099 [2024-11-08 17:15:15.548173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.099 [2024-11-08 17:15:15.707335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.357 [2024-11-08 17:15:15.828567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.357 [2024-11-08 17:15:15.978190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:39.357 [2024-11-08 17:15:15.978344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:39.922 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:39.923 [2024-11-08 17:15:16.421993] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:39.923 [2024-11-08 17:15:16.422160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:39.923 [2024-11-08 17:15:16.422228] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:39.923 [2024-11-08 17:15:16.422257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:39.923 "name": "Existed_Raid", 00:27:39.923 "uuid": "f198ba41-b280-4524-a4a4-ff1bb815461e", 00:27:39.923 "strip_size_kb": 0, 00:27:39.923 "state": "configuring", 00:27:39.923 "raid_level": "raid1", 00:27:39.923 "superblock": true, 00:27:39.923 "num_base_bdevs": 2, 00:27:39.923 "num_base_bdevs_discovered": 0, 00:27:39.923 "num_base_bdevs_operational": 2, 00:27:39.923 "base_bdevs_list": [ 00:27:39.923 { 00:27:39.923 "name": "BaseBdev1", 00:27:39.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.923 "is_configured": false, 00:27:39.923 "data_offset": 0, 00:27:39.923 "data_size": 0 00:27:39.923 }, 00:27:39.923 { 00:27:39.923 "name": "BaseBdev2", 00:27:39.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.923 "is_configured": false, 00:27:39.923 "data_offset": 0, 00:27:39.923 "data_size": 0 00:27:39.923 } 00:27:39.923 ] 00:27:39.923 }' 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:39.923 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.181 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:40.181 17:15:16 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.181 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.181 [2024-11-08 17:15:16.754047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:40.181 [2024-11-08 17:15:16.754087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:40.181 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.181 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:40.181 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.181 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.181 [2024-11-08 17:15:16.762009] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:40.181 [2024-11-08 17:15:16.762052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:40.181 [2024-11-08 17:15:16.762062] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:40.181 [2024-11-08 17:15:16.762073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:40.181 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.181 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:27:40.181 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.182 17:15:16 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.182 [2024-11-08 17:15:16.797398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:40.182 BaseBdev1 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:27:40.182 [ 00:27:40.182 { 00:27:40.182 "name": "BaseBdev1", 00:27:40.182 "aliases": [ 00:27:40.182 "50ecfcb8-8f34-428a-8a30-1fc99a284115" 00:27:40.182 ], 00:27:40.182 "product_name": "Malloc disk", 00:27:40.182 "block_size": 4096, 00:27:40.182 "num_blocks": 8192, 00:27:40.182 "uuid": "50ecfcb8-8f34-428a-8a30-1fc99a284115", 00:27:40.182 "md_size": 32, 00:27:40.182 "md_interleave": false, 00:27:40.182 "dif_type": 0, 00:27:40.182 "assigned_rate_limits": { 00:27:40.182 "rw_ios_per_sec": 0, 00:27:40.182 "rw_mbytes_per_sec": 0, 00:27:40.182 "r_mbytes_per_sec": 0, 00:27:40.182 "w_mbytes_per_sec": 0 00:27:40.182 }, 00:27:40.182 "claimed": true, 00:27:40.182 "claim_type": "exclusive_write", 00:27:40.182 "zoned": false, 00:27:40.182 "supported_io_types": { 00:27:40.182 "read": true, 00:27:40.182 "write": true, 00:27:40.182 "unmap": true, 00:27:40.182 "flush": true, 00:27:40.182 "reset": true, 00:27:40.182 "nvme_admin": false, 00:27:40.182 "nvme_io": false, 00:27:40.182 "nvme_io_md": false, 00:27:40.182 "write_zeroes": true, 00:27:40.182 "zcopy": true, 00:27:40.182 "get_zone_info": false, 00:27:40.182 "zone_management": false, 00:27:40.182 "zone_append": false, 00:27:40.182 "compare": false, 00:27:40.182 "compare_and_write": false, 00:27:40.182 "abort": true, 00:27:40.182 "seek_hole": false, 00:27:40.182 "seek_data": false, 00:27:40.182 "copy": true, 00:27:40.182 "nvme_iov_md": false 00:27:40.182 }, 00:27:40.182 "memory_domains": [ 00:27:40.182 { 00:27:40.182 "dma_device_id": "system", 00:27:40.182 "dma_device_type": 1 00:27:40.182 }, 00:27:40.182 { 00:27:40.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.182 "dma_device_type": 2 00:27:40.182 } 00:27:40.182 ], 00:27:40.182 "driver_specific": {} 00:27:40.182 } 00:27:40.182 ] 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@909 -- # return 0 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.182 "name": "Existed_Raid", 00:27:40.182 "uuid": "f99f65fa-a19d-4b75-bbc6-428d20db6769", 00:27:40.182 "strip_size_kb": 0, 00:27:40.182 "state": "configuring", 00:27:40.182 "raid_level": "raid1", 00:27:40.182 "superblock": true, 00:27:40.182 "num_base_bdevs": 2, 00:27:40.182 "num_base_bdevs_discovered": 1, 00:27:40.182 "num_base_bdevs_operational": 2, 00:27:40.182 "base_bdevs_list": [ 00:27:40.182 { 00:27:40.182 "name": "BaseBdev1", 00:27:40.182 "uuid": "50ecfcb8-8f34-428a-8a30-1fc99a284115", 00:27:40.182 "is_configured": true, 00:27:40.182 "data_offset": 256, 00:27:40.182 "data_size": 7936 00:27:40.182 }, 00:27:40.182 { 00:27:40.182 "name": "BaseBdev2", 00:27:40.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.182 "is_configured": false, 00:27:40.182 "data_offset": 0, 00:27:40.182 "data_size": 0 00:27:40.182 } 00:27:40.182 ] 00:27:40.182 }' 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.182 17:15:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.440 [2024-11-08 17:15:17.137557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:40.440 [2024-11-08 17:15:17.137720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.440 [2024-11-08 17:15:17.145595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:40.440 [2024-11-08 17:15:17.147706] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:40.440 [2024-11-08 17:15:17.147845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.440 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.698 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.698 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.698 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.698 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:40.698 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.698 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.698 "name": "Existed_Raid", 00:27:40.698 "uuid": "7b13e43e-0e11-4e2d-88f3-143f6238fbeb", 00:27:40.698 "strip_size_kb": 0, 00:27:40.698 "state": "configuring", 00:27:40.698 "raid_level": "raid1", 00:27:40.698 "superblock": true, 00:27:40.698 "num_base_bdevs": 2, 00:27:40.698 "num_base_bdevs_discovered": 1, 00:27:40.698 "num_base_bdevs_operational": 2, 00:27:40.698 "base_bdevs_list": [ 00:27:40.698 { 00:27:40.698 "name": "BaseBdev1", 00:27:40.698 "uuid": "50ecfcb8-8f34-428a-8a30-1fc99a284115", 00:27:40.698 "is_configured": true, 00:27:40.698 "data_offset": 256, 00:27:40.698 "data_size": 7936 00:27:40.698 }, 00:27:40.698 { 00:27:40.698 "name": "BaseBdev2", 00:27:40.698 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:40.698 "is_configured": false, 00:27:40.698 "data_offset": 0, 00:27:40.698 "data_size": 0 00:27:40.698 } 00:27:40.698 ] 00:27:40.698 }' 00:27:40.698 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.698 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.956 [2024-11-08 17:15:17.498846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:40.956 [2024-11-08 17:15:17.499057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:40.956 [2024-11-08 17:15:17.499071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:40.956 [2024-11-08 17:15:17.499155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:40.956 [2024-11-08 17:15:17.499268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:40.956 [2024-11-08 17:15:17.499279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:40.956 [2024-11-08 17:15:17.499362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.956 BaseBdev2 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:40.956 
17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.956 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.956 [ 00:27:40.956 { 00:27:40.956 "name": "BaseBdev2", 00:27:40.956 "aliases": [ 00:27:40.956 "e2ff1ac6-3a76-4ec8-8314-a955443200e2" 00:27:40.956 ], 00:27:40.956 "product_name": "Malloc disk", 00:27:40.956 "block_size": 4096, 00:27:40.956 "num_blocks": 8192, 00:27:40.956 "uuid": "e2ff1ac6-3a76-4ec8-8314-a955443200e2", 00:27:40.956 "md_size": 32, 00:27:40.956 "md_interleave": false, 00:27:40.956 "dif_type": 0, 00:27:40.956 "assigned_rate_limits": { 00:27:40.956 
"rw_ios_per_sec": 0, 00:27:40.956 "rw_mbytes_per_sec": 0, 00:27:40.956 "r_mbytes_per_sec": 0, 00:27:40.956 "w_mbytes_per_sec": 0 00:27:40.956 }, 00:27:40.956 "claimed": true, 00:27:40.956 "claim_type": "exclusive_write", 00:27:40.956 "zoned": false, 00:27:40.956 "supported_io_types": { 00:27:40.956 "read": true, 00:27:40.956 "write": true, 00:27:40.956 "unmap": true, 00:27:40.956 "flush": true, 00:27:40.956 "reset": true, 00:27:40.956 "nvme_admin": false, 00:27:40.956 "nvme_io": false, 00:27:40.956 "nvme_io_md": false, 00:27:40.957 "write_zeroes": true, 00:27:40.957 "zcopy": true, 00:27:40.957 "get_zone_info": false, 00:27:40.957 "zone_management": false, 00:27:40.957 "zone_append": false, 00:27:40.957 "compare": false, 00:27:40.957 "compare_and_write": false, 00:27:40.957 "abort": true, 00:27:40.957 "seek_hole": false, 00:27:40.957 "seek_data": false, 00:27:40.957 "copy": true, 00:27:40.957 "nvme_iov_md": false 00:27:40.957 }, 00:27:40.957 "memory_domains": [ 00:27:40.957 { 00:27:40.957 "dma_device_id": "system", 00:27:40.957 "dma_device_type": 1 00:27:40.957 }, 00:27:40.957 { 00:27:40.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.957 "dma_device_type": 2 00:27:40.957 } 00:27:40.957 ], 00:27:40.957 "driver_specific": {} 00:27:40.957 } 00:27:40.957 ] 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.957 "name": "Existed_Raid", 00:27:40.957 "uuid": "7b13e43e-0e11-4e2d-88f3-143f6238fbeb", 00:27:40.957 "strip_size_kb": 0, 00:27:40.957 "state": "online", 
00:27:40.957 "raid_level": "raid1", 00:27:40.957 "superblock": true, 00:27:40.957 "num_base_bdevs": 2, 00:27:40.957 "num_base_bdevs_discovered": 2, 00:27:40.957 "num_base_bdevs_operational": 2, 00:27:40.957 "base_bdevs_list": [ 00:27:40.957 { 00:27:40.957 "name": "BaseBdev1", 00:27:40.957 "uuid": "50ecfcb8-8f34-428a-8a30-1fc99a284115", 00:27:40.957 "is_configured": true, 00:27:40.957 "data_offset": 256, 00:27:40.957 "data_size": 7936 00:27:40.957 }, 00:27:40.957 { 00:27:40.957 "name": "BaseBdev2", 00:27:40.957 "uuid": "e2ff1ac6-3a76-4ec8-8314-a955443200e2", 00:27:40.957 "is_configured": true, 00:27:40.957 "data_offset": 256, 00:27:40.957 "data_size": 7936 00:27:40.957 } 00:27:40.957 ] 00:27:40.957 }' 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.957 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.215 
17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:41.215 [2024-11-08 17:15:17.855363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:41.215 "name": "Existed_Raid", 00:27:41.215 "aliases": [ 00:27:41.215 "7b13e43e-0e11-4e2d-88f3-143f6238fbeb" 00:27:41.215 ], 00:27:41.215 "product_name": "Raid Volume", 00:27:41.215 "block_size": 4096, 00:27:41.215 "num_blocks": 7936, 00:27:41.215 "uuid": "7b13e43e-0e11-4e2d-88f3-143f6238fbeb", 00:27:41.215 "md_size": 32, 00:27:41.215 "md_interleave": false, 00:27:41.215 "dif_type": 0, 00:27:41.215 "assigned_rate_limits": { 00:27:41.215 "rw_ios_per_sec": 0, 00:27:41.215 "rw_mbytes_per_sec": 0, 00:27:41.215 "r_mbytes_per_sec": 0, 00:27:41.215 "w_mbytes_per_sec": 0 00:27:41.215 }, 00:27:41.215 "claimed": false, 00:27:41.215 "zoned": false, 00:27:41.215 "supported_io_types": { 00:27:41.215 "read": true, 00:27:41.215 "write": true, 00:27:41.215 "unmap": false, 00:27:41.215 "flush": false, 00:27:41.215 "reset": true, 00:27:41.215 "nvme_admin": false, 00:27:41.215 "nvme_io": false, 00:27:41.215 "nvme_io_md": false, 00:27:41.215 "write_zeroes": true, 00:27:41.215 "zcopy": false, 00:27:41.215 "get_zone_info": false, 00:27:41.215 "zone_management": false, 00:27:41.215 "zone_append": false, 00:27:41.215 "compare": false, 00:27:41.215 "compare_and_write": false, 00:27:41.215 "abort": false, 00:27:41.215 "seek_hole": false, 00:27:41.215 "seek_data": false, 00:27:41.215 "copy": false, 00:27:41.215 "nvme_iov_md": false 00:27:41.215 }, 00:27:41.215 "memory_domains": [ 00:27:41.215 { 00:27:41.215 
"dma_device_id": "system", 00:27:41.215 "dma_device_type": 1 00:27:41.215 }, 00:27:41.215 { 00:27:41.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:41.215 "dma_device_type": 2 00:27:41.215 }, 00:27:41.215 { 00:27:41.215 "dma_device_id": "system", 00:27:41.215 "dma_device_type": 1 00:27:41.215 }, 00:27:41.215 { 00:27:41.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:41.215 "dma_device_type": 2 00:27:41.215 } 00:27:41.215 ], 00:27:41.215 "driver_specific": { 00:27:41.215 "raid": { 00:27:41.215 "uuid": "7b13e43e-0e11-4e2d-88f3-143f6238fbeb", 00:27:41.215 "strip_size_kb": 0, 00:27:41.215 "state": "online", 00:27:41.215 "raid_level": "raid1", 00:27:41.215 "superblock": true, 00:27:41.215 "num_base_bdevs": 2, 00:27:41.215 "num_base_bdevs_discovered": 2, 00:27:41.215 "num_base_bdevs_operational": 2, 00:27:41.215 "base_bdevs_list": [ 00:27:41.215 { 00:27:41.215 "name": "BaseBdev1", 00:27:41.215 "uuid": "50ecfcb8-8f34-428a-8a30-1fc99a284115", 00:27:41.215 "is_configured": true, 00:27:41.215 "data_offset": 256, 00:27:41.215 "data_size": 7936 00:27:41.215 }, 00:27:41.215 { 00:27:41.215 "name": "BaseBdev2", 00:27:41.215 "uuid": "e2ff1ac6-3a76-4ec8-8314-a955443200e2", 00:27:41.215 "is_configured": true, 00:27:41.215 "data_offset": 256, 00:27:41.215 "data_size": 7936 00:27:41.215 } 00:27:41.215 ] 00:27:41.215 } 00:27:41.215 } 00:27:41.215 }' 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:41.215 BaseBdev2' 00:27:41.215 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 
false 0' 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:41.481 17:15:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.481 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:41.481 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:41.481 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:41.481 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.481 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:41.481 [2024-11-08 17:15:18.007066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:41.481 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:41.482 17:15:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:41.482 "name": "Existed_Raid", 00:27:41.482 "uuid": "7b13e43e-0e11-4e2d-88f3-143f6238fbeb", 00:27:41.482 "strip_size_kb": 0, 00:27:41.482 "state": "online", 00:27:41.482 "raid_level": "raid1", 00:27:41.482 "superblock": true, 00:27:41.482 "num_base_bdevs": 2, 00:27:41.482 "num_base_bdevs_discovered": 1, 00:27:41.482 "num_base_bdevs_operational": 1, 00:27:41.482 "base_bdevs_list": [ 00:27:41.482 { 00:27:41.482 "name": null, 00:27:41.482 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:41.482 "is_configured": false, 00:27:41.482 "data_offset": 0, 00:27:41.482 "data_size": 7936 00:27:41.482 }, 00:27:41.482 { 00:27:41.482 "name": "BaseBdev2", 00:27:41.482 "uuid": "e2ff1ac6-3a76-4ec8-8314-a955443200e2", 00:27:41.482 "is_configured": true, 00:27:41.482 "data_offset": 256, 00:27:41.482 "data_size": 7936 00:27:41.482 } 00:27:41.482 ] 00:27:41.482 }' 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:41.482 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
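(Aside: the `verify_raid_bdev_state` check traced above pulls the `Existed_Raid` entry out of `bdev_raid_get_bdevs all` with jq and then inspects `state` and the configured base bdevs. A minimal Python sketch of that same selection, using the JSON dumped in the log after `BaseBdev1` was deleted; the variable names are illustrative, not part of the test script.)

```python
import json

# Reproduced from the Existed_Raid dump above (after bdev_malloc_delete BaseBdev1):
# one leg is unconfigured (name null), only BaseBdev2 remains configured.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "is_configured": false, "data_offset": 0, "data_size": 7936},
    {"name": "BaseBdev2", "is_configured": true, "data_offset": 256, "data_size": 7936}
  ]
}
""")

# Equivalent of the jq filter used by bdev_raid.sh@188:
#   .base_bdevs_list[] | select(.is_configured == true).name
configured = [b["name"] for b in raid_bdev_info["base_bdevs_list"] if b["is_configured"]]
print(configured)               # ['BaseBdev2']
print(raid_bdev_info["state"])  # online
```

With raid1's redundancy, losing one of two base bdevs leaves the array `online` with one operational leg, which is exactly what the `expected_state=online` branch asserts.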
00:27:41.762 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:41.762 [2024-11-08 17:15:18.446946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:41.762 [2024-11-08 17:15:18.447058] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:42.022 [2024-11-08 17:15:18.513677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:42.022 [2024-11-08 17:15:18.513728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:42.022 [2024-11-08 17:15:18.513745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' 
-n '' ']' 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 85393 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 85393 ']' 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 85393 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85393 00:27:42.022 killing process with pid 85393 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85393' 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 85393 00:27:42.022 [2024-11-08 17:15:18.565662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:42.022 17:15:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 85393 00:27:42.022 [2024-11-08 17:15:18.576638] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:42.955 17:15:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:27:42.955 00:27:42.955 real 0m3.842s 00:27:42.955 user 0m5.494s 00:27:42.955 sys 
0m0.619s 00:27:42.955 17:15:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:42.955 ************************************ 00:27:42.955 END TEST raid_state_function_test_sb_md_separate 00:27:42.955 ************************************ 00:27:42.955 17:15:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:42.955 17:15:19 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:27:42.955 17:15:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:42.955 17:15:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:42.955 17:15:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:42.955 ************************************ 00:27:42.955 START TEST raid_superblock_test_md_separate 00:27:42.955 ************************************ 00:27:42.955 17:15:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # 
local base_bdevs_pt_uuid 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=85629 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 85629 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 85629 ']' 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:42.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:42.956 17:15:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:42.956 [2024-11-08 17:15:19.439097] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:27:42.956 [2024-11-08 17:15:19.439233] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85629 ] 00:27:42.956 [2024-11-08 17:15:19.598401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.213 [2024-11-08 17:15:19.711884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.213 [2024-11-08 17:15:19.858495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:43.213 [2024-11-08 17:15:19.858544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:43.783 17:15:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:43.783 malloc1 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:43.783 [2024-11-08 17:15:20.319164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:43.783 [2024-11-08 17:15:20.319226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:43.783 [2024-11-08 17:15:20.319250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:43.783 [2024-11-08 17:15:20.319261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:43.783 [2024-11-08 17:15:20.321265] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:43.783 [2024-11-08 17:15:20.321302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:43.783 pt1 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:43.783 malloc2 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
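(Aside: the superblock test's setup loop, `bdev_raid.sh@416`-`@423` in the trace above, builds each RAID leg as a malloc bdev with separate metadata plus a passthru wrapper, recording the names and fixed-pattern UUIDs it will pass to the RPCs. A small Python sketch of that naming loop, assuming only what the traced RPC arguments show (`malloc1`/`pt1`/`00000000-...-000000000001` and so on); it is an illustration of the bookkeeping, not the script itself.)

```python
# For num_base_bdevs legs, record the malloc name, the passthru (pt) name,
# and the zero-padded UUID matching the "-u 00000000-0000-0000-0000-00000000000N"
# arguments seen in the bdev_passthru_create calls above.
num_base_bdevs = 2
base_bdevs_malloc, base_bdevs_pt, base_bdevs_pt_uuid = [], [], []
for i in range(1, num_base_bdevs + 1):
    base_bdevs_malloc.append(f"malloc{i}")
    base_bdevs_pt.append(f"pt{i}")
    base_bdevs_pt_uuid.append(f"00000000-0000-0000-0000-{i:012d}")

print(base_bdevs_pt)          # ['pt1', 'pt2']
print(base_bdevs_pt_uuid[0])  # 00000000-0000-0000-0000-000000000001
```

The pt names are then joined into the `bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s` call that appears next in the trace.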
00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:43.783 [2024-11-08 17:15:20.358229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:43.783 [2024-11-08 17:15:20.358282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:43.783 [2024-11-08 17:15:20.358304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:43.783 [2024-11-08 17:15:20.358314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:43.783 [2024-11-08 17:15:20.360326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:43.783 [2024-11-08 17:15:20.360359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:43.783 pt2 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:43.783 [2024-11-08 17:15:20.366255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:43.783 [2024-11-08 17:15:20.368185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:43.783 [2024-11-08 17:15:20.368355] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:43.783 [2024-11-08 17:15:20.368375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:43.783 [2024-11-08 17:15:20.368454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:43.783 [2024-11-08 17:15:20.368579] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:43.783 [2024-11-08 17:15:20.368598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:43.783 [2024-11-08 17:15:20.368692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:43.783 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:43.784 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:43.784 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:43.784 17:15:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:43.784 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.784 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.784 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:43.784 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.784 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.784 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:43.784 "name": "raid_bdev1", 00:27:43.784 "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27", 00:27:43.784 "strip_size_kb": 0, 00:27:43.784 "state": "online", 00:27:43.784 "raid_level": "raid1", 00:27:43.784 "superblock": true, 00:27:43.784 "num_base_bdevs": 2, 00:27:43.784 "num_base_bdevs_discovered": 2, 00:27:43.784 "num_base_bdevs_operational": 2, 00:27:43.784 "base_bdevs_list": [ 00:27:43.784 { 00:27:43.784 "name": "pt1", 00:27:43.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:43.784 "is_configured": true, 00:27:43.784 "data_offset": 256, 00:27:43.784 "data_size": 7936 00:27:43.784 }, 00:27:43.784 { 00:27:43.784 "name": "pt2", 00:27:43.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:43.784 "is_configured": true, 00:27:43.784 "data_offset": 256, 00:27:43.784 "data_size": 7936 00:27:43.784 } 00:27:43.784 ] 00:27:43.784 }' 00:27:43.784 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:43.784 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # 
verify_raid_bdev_properties raid_bdev1 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.045 [2024-11-08 17:15:20.682671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:44.045 "name": "raid_bdev1", 00:27:44.045 "aliases": [ 00:27:44.045 "8efd79bb-d6bf-46da-8481-25bc98d7cf27" 00:27:44.045 ], 00:27:44.045 "product_name": "Raid Volume", 00:27:44.045 "block_size": 4096, 00:27:44.045 "num_blocks": 7936, 00:27:44.045 "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27", 00:27:44.045 "md_size": 32, 00:27:44.045 "md_interleave": false, 00:27:44.045 "dif_type": 0, 00:27:44.045 "assigned_rate_limits": { 00:27:44.045 "rw_ios_per_sec": 0, 00:27:44.045 "rw_mbytes_per_sec": 0, 00:27:44.045 "r_mbytes_per_sec": 0, 00:27:44.045 
"w_mbytes_per_sec": 0 00:27:44.045 }, 00:27:44.045 "claimed": false, 00:27:44.045 "zoned": false, 00:27:44.045 "supported_io_types": { 00:27:44.045 "read": true, 00:27:44.045 "write": true, 00:27:44.045 "unmap": false, 00:27:44.045 "flush": false, 00:27:44.045 "reset": true, 00:27:44.045 "nvme_admin": false, 00:27:44.045 "nvme_io": false, 00:27:44.045 "nvme_io_md": false, 00:27:44.045 "write_zeroes": true, 00:27:44.045 "zcopy": false, 00:27:44.045 "get_zone_info": false, 00:27:44.045 "zone_management": false, 00:27:44.045 "zone_append": false, 00:27:44.045 "compare": false, 00:27:44.045 "compare_and_write": false, 00:27:44.045 "abort": false, 00:27:44.045 "seek_hole": false, 00:27:44.045 "seek_data": false, 00:27:44.045 "copy": false, 00:27:44.045 "nvme_iov_md": false 00:27:44.045 }, 00:27:44.045 "memory_domains": [ 00:27:44.045 { 00:27:44.045 "dma_device_id": "system", 00:27:44.045 "dma_device_type": 1 00:27:44.045 }, 00:27:44.045 { 00:27:44.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.045 "dma_device_type": 2 00:27:44.045 }, 00:27:44.045 { 00:27:44.045 "dma_device_id": "system", 00:27:44.045 "dma_device_type": 1 00:27:44.045 }, 00:27:44.045 { 00:27:44.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.045 "dma_device_type": 2 00:27:44.045 } 00:27:44.045 ], 00:27:44.045 "driver_specific": { 00:27:44.045 "raid": { 00:27:44.045 "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27", 00:27:44.045 "strip_size_kb": 0, 00:27:44.045 "state": "online", 00:27:44.045 "raid_level": "raid1", 00:27:44.045 "superblock": true, 00:27:44.045 "num_base_bdevs": 2, 00:27:44.045 "num_base_bdevs_discovered": 2, 00:27:44.045 "num_base_bdevs_operational": 2, 00:27:44.045 "base_bdevs_list": [ 00:27:44.045 { 00:27:44.045 "name": "pt1", 00:27:44.045 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:44.045 "is_configured": true, 00:27:44.045 "data_offset": 256, 00:27:44.045 "data_size": 7936 00:27:44.045 }, 00:27:44.045 { 00:27:44.045 "name": "pt2", 00:27:44.045 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:27:44.045 "is_configured": true, 00:27:44.045 "data_offset": 256, 00:27:44.045 "data_size": 7936 00:27:44.045 } 00:27:44.045 ] 00:27:44.045 } 00:27:44.045 } 00:27:44.045 }' 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:44.045 pt2' 00:27:44.045 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
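(Aside: the `verify_raid_bdev_properties` comparison traced above reduces a bdev dump to the string `'4096 32 false 0'` via jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`, then matches the raid bdev against each base bdev. A minimal Python equivalent of that reduction, using the field values from the `raid_bdev1` dump; the lowercase boolean rendering mimics jq's `true`/`false` output.)

```python
import json

# Property fields as shown in the raid_bdev1 dump above.
bdev = json.loads('{"block_size": 4096, "md_size": 32, "md_interleave": false, "dif_type": 0}')

# jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
# str(...).lower() maps Python's False to jq-style "false".
fields = [bdev["block_size"], bdev["md_size"], bdev["md_interleave"], bdev["dif_type"]]
cmp_raid_bdev = " ".join(str(f).lower() for f in fields)
print(cmp_raid_bdev)  # 4096 32 false 0
```

The `[[ $cmp_base_bdev == $cmp_raid_bdev ]]` checks in the trace pass precisely because `pt1`, `pt2`, and `raid_bdev1` all reduce to this same `4096 32 false 0` string: 4096-byte blocks with 32 bytes of separate (non-interleaved) metadata and no DIF.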
00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:44.306 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.307 [2024-11-08 17:15:20.826636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8efd79bb-d6bf-46da-8481-25bc98d7cf27 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 8efd79bb-d6bf-46da-8481-25bc98d7cf27 ']' 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.307 [2024-11-08 17:15:20.850319] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:44.307 [2024-11-08 17:15:20.850344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:44.307 [2024-11-08 17:15:20.850429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:44.307 [2024-11-08 17:15:20.850494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:44.307 [2024-11-08 17:15:20.850511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 
00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:44.307 17:15:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.307 [2024-11-08 17:15:20.942366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:44.307 [2024-11-08 17:15:20.944349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:44.307 [2024-11-08 17:15:20.944434] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:44.307 [2024-11-08 17:15:20.944483] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:44.307 [2024-11-08 17:15:20.944498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:44.307 [2024-11-08 17:15:20.944510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:27:44.307 request: 00:27:44.307 { 00:27:44.307 "name": "raid_bdev1", 00:27:44.307 "raid_level": "raid1", 00:27:44.307 "base_bdevs": [ 00:27:44.307 "malloc1", 00:27:44.307 "malloc2" 00:27:44.307 ], 00:27:44.307 "superblock": false, 00:27:44.307 "method": "bdev_raid_create", 00:27:44.307 "req_id": 1 00:27:44.307 } 00:27:44.307 Got JSON-RPC error response 00:27:44.307 response: 00:27:44.307 { 00:27:44.307 "code": -17, 00:27:44.307 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:44.307 } 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:44.307 17:15:20 
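The `NOT rpc_cmd bdev_raid_create ...` step above is a negative test: creating `raid_bdev1` over bdevs that already carry a foreign superblock must fail with the JSON-RPC error shown (`-17`, i.e. `-EEXIST`). The test wrapper only asserts a non-zero exit status (`es=1`), but the error body can also be inspected directly; a sketch over a hypothetical copy of the response payload from the log:

```shell
# Hypothetical copy of the JSON-RPC error object the log prints; in a live
# run this would come from the rpc.py failure output, not a literal.
response='{"code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists"}'

# Extract the numeric code; -17 is -EEXIST, matching "File exists".
code=$(echo "$response" | jq -r '.code')

[ "$code" = "-17" ] && echo "create correctly rejected with EEXIST"
```

Checking the code rather than the message text is the more robust pattern, since the message string embeds the bdev name.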
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.307 [2024-11-08 17:15:20.986372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:44.307 [2024-11-08 17:15:20.986429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:44.307 [2024-11-08 17:15:20.986446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:44.307 [2024-11-08 17:15:20.986458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:44.307 [2024-11-08 17:15:20.988536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:44.307 [2024-11-08 17:15:20.988575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:44.307 [2024-11-08 17:15:20.988626] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:44.307 [2024-11-08 17:15:20.988682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:44.307 pt1 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.307 17:15:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.307 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.307 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:44.307 "name": "raid_bdev1", 00:27:44.307 "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27", 00:27:44.307 "strip_size_kb": 0, 00:27:44.307 "state": "configuring", 00:27:44.307 "raid_level": "raid1", 00:27:44.307 "superblock": true, 00:27:44.308 "num_base_bdevs": 2, 00:27:44.308 "num_base_bdevs_discovered": 1, 00:27:44.308 "num_base_bdevs_operational": 2, 00:27:44.308 "base_bdevs_list": [ 00:27:44.308 { 00:27:44.308 "name": "pt1", 00:27:44.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:44.308 "is_configured": true, 00:27:44.308 
"data_offset": 256, 00:27:44.308 "data_size": 7936 00:27:44.308 }, 00:27:44.308 { 00:27:44.308 "name": null, 00:27:44.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:44.308 "is_configured": false, 00:27:44.308 "data_offset": 256, 00:27:44.308 "data_size": 7936 00:27:44.308 } 00:27:44.308 ] 00:27:44.308 }' 00:27:44.308 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:44.308 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.876 [2024-11-08 17:15:21.290463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:44.876 [2024-11-08 17:15:21.290539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:44.876 [2024-11-08 17:15:21.290560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:44.876 [2024-11-08 17:15:21.290571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:44.876 [2024-11-08 17:15:21.290827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:44.876 [2024-11-08 17:15:21.290844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt2 00:27:44.876 [2024-11-08 17:15:21.290895] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:44.876 [2024-11-08 17:15:21.290918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:44.876 [2024-11-08 17:15:21.291031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:44.876 [2024-11-08 17:15:21.291054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:44.876 [2024-11-08 17:15:21.291124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:44.876 [2024-11-08 17:15:21.291226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:44.876 [2024-11-08 17:15:21.291241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:44.876 [2024-11-08 17:15:21.291334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:44.876 pt2 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:44.876 "name": "raid_bdev1", 00:27:44.876 "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27", 00:27:44.876 "strip_size_kb": 0, 00:27:44.876 "state": "online", 00:27:44.876 "raid_level": "raid1", 00:27:44.876 "superblock": true, 00:27:44.876 "num_base_bdevs": 2, 00:27:44.876 "num_base_bdevs_discovered": 2, 00:27:44.876 "num_base_bdevs_operational": 2, 00:27:44.876 "base_bdevs_list": [ 00:27:44.876 { 00:27:44.876 "name": "pt1", 00:27:44.876 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:44.876 "is_configured": true, 00:27:44.876 "data_offset": 256, 00:27:44.876 "data_size": 7936 00:27:44.876 }, 00:27:44.876 { 00:27:44.876 "name": "pt2", 00:27:44.876 
"uuid": "00000000-0000-0000-0000-000000000002", 00:27:44.876 "is_configured": true, 00:27:44.876 "data_offset": 256, 00:27:44.876 "data_size": 7936 00:27:44.876 } 00:27:44.876 ] 00:27:44.876 }' 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:44.876 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 [2024-11-08 17:15:21.618875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.151 "name": "raid_bdev1", 
00:27:45.151 "aliases": [ 00:27:45.151 "8efd79bb-d6bf-46da-8481-25bc98d7cf27" 00:27:45.151 ], 00:27:45.151 "product_name": "Raid Volume", 00:27:45.151 "block_size": 4096, 00:27:45.151 "num_blocks": 7936, 00:27:45.151 "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27", 00:27:45.151 "md_size": 32, 00:27:45.151 "md_interleave": false, 00:27:45.151 "dif_type": 0, 00:27:45.151 "assigned_rate_limits": { 00:27:45.151 "rw_ios_per_sec": 0, 00:27:45.151 "rw_mbytes_per_sec": 0, 00:27:45.151 "r_mbytes_per_sec": 0, 00:27:45.151 "w_mbytes_per_sec": 0 00:27:45.151 }, 00:27:45.151 "claimed": false, 00:27:45.151 "zoned": false, 00:27:45.151 "supported_io_types": { 00:27:45.151 "read": true, 00:27:45.151 "write": true, 00:27:45.151 "unmap": false, 00:27:45.151 "flush": false, 00:27:45.151 "reset": true, 00:27:45.151 "nvme_admin": false, 00:27:45.151 "nvme_io": false, 00:27:45.151 "nvme_io_md": false, 00:27:45.151 "write_zeroes": true, 00:27:45.151 "zcopy": false, 00:27:45.151 "get_zone_info": false, 00:27:45.151 "zone_management": false, 00:27:45.151 "zone_append": false, 00:27:45.151 "compare": false, 00:27:45.151 "compare_and_write": false, 00:27:45.151 "abort": false, 00:27:45.151 "seek_hole": false, 00:27:45.151 "seek_data": false, 00:27:45.151 "copy": false, 00:27:45.151 "nvme_iov_md": false 00:27:45.151 }, 00:27:45.151 "memory_domains": [ 00:27:45.151 { 00:27:45.151 "dma_device_id": "system", 00:27:45.151 "dma_device_type": 1 00:27:45.151 }, 00:27:45.151 { 00:27:45.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.151 "dma_device_type": 2 00:27:45.151 }, 00:27:45.151 { 00:27:45.151 "dma_device_id": "system", 00:27:45.151 "dma_device_type": 1 00:27:45.151 }, 00:27:45.151 { 00:27:45.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.151 "dma_device_type": 2 00:27:45.151 } 00:27:45.151 ], 00:27:45.151 "driver_specific": { 00:27:45.151 "raid": { 00:27:45.151 "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27", 00:27:45.151 "strip_size_kb": 0, 00:27:45.151 "state": "online", 
00:27:45.151 "raid_level": "raid1", 00:27:45.151 "superblock": true, 00:27:45.151 "num_base_bdevs": 2, 00:27:45.151 "num_base_bdevs_discovered": 2, 00:27:45.151 "num_base_bdevs_operational": 2, 00:27:45.151 "base_bdevs_list": [ 00:27:45.151 { 00:27:45.151 "name": "pt1", 00:27:45.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:45.151 "is_configured": true, 00:27:45.151 "data_offset": 256, 00:27:45.151 "data_size": 7936 00:27:45.151 }, 00:27:45.151 { 00:27:45.151 "name": "pt2", 00:27:45.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:45.151 "is_configured": true, 00:27:45.151 "data_offset": 256, 00:27:45.151 "data_size": 7936 00:27:45.151 } 00:27:45.151 ] 00:27:45.151 } 00:27:45.151 } 00:27:45.151 }' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:45.151 pt2' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 
17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:45.151 [2024-11-08 17:15:21.770873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 8efd79bb-d6bf-46da-8481-25bc98d7cf27 '!=' 8efd79bb-d6bf-46da-8481-25bc98d7cf27 ']' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 [2024-11-08 17:15:21.802605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:45.151 
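The `'[' 8efd79bb-... '!=' 8efd79bb-... ']'` check above asserts that the array's UUID is stable: the raid bdev reassembled from the on-disk superblocks must report the same UUID it had before teardown. A sketch with hypothetical before/after snapshots standing in for the two `bdev_get_bdevs -b raid_bdev1` calls:

```shell
# Hypothetical snapshots taken before teardown and after re-examine; in the
# live test both come from `rpc_cmd bdev_get_bdevs -b raid_bdev1 | jq '.[] | .uuid'`.
before='[{"name": "raid_bdev1", "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27"}]'
after='[{"name": "raid_bdev1", "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27"}]'

u1=$(echo "$before" | jq -r '.[] | .uuid')
u2=$(echo "$after"  | jq -r '.[] | .uuid')

# Test asserts inequality is false, i.e. the UUID survived reassembly.
[ "$u1" = "$u2" ] && echo "uuid preserved across superblock reassembly"
```

The subsequent `has_redundancy raid1` gate matters because the pt1 removal that follows is only legal for levels that tolerate a missing base bdev.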
17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.151 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:45.151 "name": "raid_bdev1", 00:27:45.151 "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27", 00:27:45.151 "strip_size_kb": 0, 00:27:45.151 "state": "online", 00:27:45.152 "raid_level": "raid1", 00:27:45.152 "superblock": true, 00:27:45.152 "num_base_bdevs": 2, 00:27:45.152 "num_base_bdevs_discovered": 1, 00:27:45.152 "num_base_bdevs_operational": 1, 00:27:45.152 "base_bdevs_list": [ 00:27:45.152 { 00:27:45.152 "name": null, 00:27:45.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.152 "is_configured": false, 00:27:45.152 "data_offset": 0, 00:27:45.152 "data_size": 7936 00:27:45.152 }, 00:27:45.152 { 00:27:45.152 "name": "pt2", 00:27:45.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:45.152 "is_configured": true, 00:27:45.152 "data_offset": 256, 00:27:45.152 "data_size": 7936 00:27:45.152 } 
00:27:45.152 ] 00:27:45.152 }' 00:27:45.152 17:15:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:45.152 17:15:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.410 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:45.410 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.410 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.410 [2024-11-08 17:15:22.114636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:45.410 [2024-11-08 17:15:22.114663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:45.410 [2024-11-08 17:15:22.114736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:45.410 [2024-11-08 17:15:22.114793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:45.410 [2024-11-08 17:15:22.114803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:45.410 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.410 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.410 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:45.410 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.410 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.671 17:15:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.671 [2024-11-08 17:15:22.166636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:45.671 [2024-11-08 
17:15:22.166691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:45.671 [2024-11-08 17:15:22.166706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:45.671 [2024-11-08 17:15:22.166717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:45.671 [2024-11-08 17:15:22.168570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:45.671 [2024-11-08 17:15:22.168605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:45.671 [2024-11-08 17:15:22.168652] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:45.671 [2024-11-08 17:15:22.168692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:45.671 [2024-11-08 17:15:22.168779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:45.671 [2024-11-08 17:15:22.168796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:45.671 [2024-11-08 17:15:22.168863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:45.671 [2024-11-08 17:15:22.168951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:45.671 [2024-11-08 17:15:22.168958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:45.671 [2024-11-08 17:15:22.169037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:45.671 pt2 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.671 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:45.671 "name": "raid_bdev1", 00:27:45.671 "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27", 00:27:45.671 "strip_size_kb": 0, 00:27:45.671 "state": "online", 00:27:45.671 "raid_level": "raid1", 00:27:45.671 "superblock": true, 00:27:45.671 "num_base_bdevs": 2, 00:27:45.671 
"num_base_bdevs_discovered": 1, 00:27:45.671 "num_base_bdevs_operational": 1, 00:27:45.671 "base_bdevs_list": [ 00:27:45.671 { 00:27:45.671 "name": null, 00:27:45.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.671 "is_configured": false, 00:27:45.671 "data_offset": 256, 00:27:45.671 "data_size": 7936 00:27:45.671 }, 00:27:45.671 { 00:27:45.671 "name": "pt2", 00:27:45.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:45.671 "is_configured": true, 00:27:45.672 "data_offset": 256, 00:27:45.672 "data_size": 7936 00:27:45.672 } 00:27:45.672 ] 00:27:45.672 }' 00:27:45.672 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:45.672 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.930 [2024-11-08 17:15:22.474694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:45.930 [2024-11-08 17:15:22.474722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:45.930 [2024-11-08 17:15:22.474801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:45.930 [2024-11-08 17:15:22.474851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:45.930 [2024-11-08 17:15:22.474859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.930 17:15:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.930 [2024-11-08 17:15:22.518716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:45.930 [2024-11-08 17:15:22.518781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:45.930 [2024-11-08 17:15:22.518798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:45.930 [2024-11-08 17:15:22.518807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:45.930 [2024-11-08 17:15:22.520644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:45.930 [2024-11-08 17:15:22.520676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:27:45.930 [2024-11-08 17:15:22.520725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:45.930 [2024-11-08 17:15:22.520776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:45.930 [2024-11-08 17:15:22.520905] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:45.930 [2024-11-08 17:15:22.520919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:45.930 [2024-11-08 17:15:22.520935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:27:45.930 [2024-11-08 17:15:22.520981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:45.930 [2024-11-08 17:15:22.521035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:27:45.930 [2024-11-08 17:15:22.521043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:45.930 [2024-11-08 17:15:22.521107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:45.930 [2024-11-08 17:15:22.521193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:27:45.930 [2024-11-08 17:15:22.521206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:27:45.930 [2024-11-08 17:15:22.521285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:45.930 pt1 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:45.930 "name": "raid_bdev1", 00:27:45.930 "uuid": "8efd79bb-d6bf-46da-8481-25bc98d7cf27", 00:27:45.930 "strip_size_kb": 0, 00:27:45.930 "state": "online", 00:27:45.930 "raid_level": "raid1", 
00:27:45.930 "superblock": true, 00:27:45.930 "num_base_bdevs": 2, 00:27:45.930 "num_base_bdevs_discovered": 1, 00:27:45.930 "num_base_bdevs_operational": 1, 00:27:45.930 "base_bdevs_list": [ 00:27:45.930 { 00:27:45.930 "name": null, 00:27:45.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.930 "is_configured": false, 00:27:45.930 "data_offset": 256, 00:27:45.930 "data_size": 7936 00:27:45.930 }, 00:27:45.930 { 00:27:45.930 "name": "pt2", 00:27:45.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:45.930 "is_configured": true, 00:27:45.930 "data_offset": 256, 00:27:45.930 "data_size": 7936 00:27:45.930 } 00:27:45.930 ] 00:27:45.930 }' 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:45.930 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.188 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:46.188 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:27:46.188 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.188 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.188 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.188 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:46.188 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:46.188 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.188 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:46.188 
17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.188 [2024-11-08 17:15:22.883021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:46.188 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 8efd79bb-d6bf-46da-8481-25bc98d7cf27 '!=' 8efd79bb-d6bf-46da-8481-25bc98d7cf27 ']' 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 85629 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 85629 ']' 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 85629 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85629 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:46.445 killing process with pid 85629 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85629' 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 85629 00:27:46.445 17:15:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 85629 00:27:46.445 [2024-11-08 17:15:22.930735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:27:46.445 [2024-11-08 17:15:22.930842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:46.445 [2024-11-08 17:15:22.930899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:46.445 [2024-11-08 17:15:22.930916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:27:46.445 [2024-11-08 17:15:23.046601] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:47.010 17:15:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:27:47.010 00:27:47.010 real 0m4.275s 00:27:47.010 user 0m6.509s 00:27:47.010 sys 0m0.714s 00:27:47.010 17:15:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:47.010 17:15:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.010 ************************************ 00:27:47.010 END TEST raid_superblock_test_md_separate 00:27:47.010 ************************************ 00:27:47.010 17:15:23 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:27:47.010 17:15:23 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:27:47.010 17:15:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:27:47.010 17:15:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:47.010 17:15:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:47.010 ************************************ 00:27:47.010 START TEST raid_rebuild_test_sb_md_separate 00:27:47.010 ************************************ 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # 
local raid_level=raid1 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:47.010 
17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=85935 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 85935 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 85935 ']' 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:47.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:47.010 17:15:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.267 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:47.267 Zero copy mechanism will not be used. 00:27:47.267 [2024-11-08 17:15:23.752487] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:27:47.267 [2024-11-08 17:15:23.752618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85935 ] 00:27:47.267 [2024-11-08 17:15:23.912794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.525 [2024-11-08 17:15:24.014462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.525 [2024-11-08 17:15:24.134435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:47.525 [2024-11-08 17:15:24.134485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:48.091 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:48.091 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:27:48.091 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:48.091 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:27:48.091 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.091 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.091 BaseBdev1_malloc 
00:27:48.091 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.091 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:48.091 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.091 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.091 [2024-11-08 17:15:24.627596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:48.091 [2024-11-08 17:15:24.627654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.091 [2024-11-08 17:15:24.627676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:48.091 [2024-11-08 17:15:24.627686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.092 [2024-11-08 17:15:24.629369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.092 [2024-11-08 17:15:24.629403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:48.092 BaseBdev1 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.092 BaseBdev2_malloc 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.092 [2024-11-08 17:15:24.661405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:48.092 [2024-11-08 17:15:24.661454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.092 [2024-11-08 17:15:24.661471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:48.092 [2024-11-08 17:15:24.661481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.092 [2024-11-08 17:15:24.663151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.092 [2024-11-08 17:15:24.663182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:48.092 BaseBdev2 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.092 spare_malloc 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.092 spare_delay 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.092 [2024-11-08 17:15:24.720191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:48.092 [2024-11-08 17:15:24.720238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.092 [2024-11-08 17:15:24.720255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:48.092 [2024-11-08 17:15:24.720265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.092 [2024-11-08 17:15:24.721978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.092 [2024-11-08 17:15:24.722007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:48.092 spare 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:27:48.092 [2024-11-08 17:15:24.728230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:48.092 [2024-11-08 17:15:24.729840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:48.092 [2024-11-08 17:15:24.729984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:48.092 [2024-11-08 17:15:24.730001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:48.092 [2024-11-08 17:15:24.730063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:48.092 [2024-11-08 17:15:24.730167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:48.092 [2024-11-08 17:15:24.730179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:48.092 [2024-11-08 17:15:24.730263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:48.092 17:15:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:48.092 "name": "raid_bdev1", 00:27:48.092 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:48.092 "strip_size_kb": 0, 00:27:48.092 "state": "online", 00:27:48.092 "raid_level": "raid1", 00:27:48.092 "superblock": true, 00:27:48.092 "num_base_bdevs": 2, 00:27:48.092 "num_base_bdevs_discovered": 2, 00:27:48.092 "num_base_bdevs_operational": 2, 00:27:48.092 "base_bdevs_list": [ 00:27:48.092 { 00:27:48.092 "name": "BaseBdev1", 00:27:48.092 "uuid": "e8b3fa55-7184-5563-aade-0de0a8b11619", 00:27:48.092 "is_configured": true, 00:27:48.092 "data_offset": 256, 00:27:48.092 "data_size": 7936 00:27:48.092 }, 00:27:48.092 { 00:27:48.092 "name": "BaseBdev2", 00:27:48.092 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:48.092 "is_configured": true, 00:27:48.092 "data_offset": 256, 00:27:48.092 "data_size": 7936 
00:27:48.092 } 00:27:48.092 ] 00:27:48.092 }' 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:48.092 17:15:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.351 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:48.351 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:48.351 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.351 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.351 [2024-11-08 17:15:25.048569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:48.351 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.608 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:27:48.608 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:48.608 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.608 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:48.608 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.608 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.608 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:27:48.608 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:27:48.608 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:48.609 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:48.609 [2024-11-08 17:15:25.312429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:48.866 /dev/nbd0 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # local i 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:48.866 1+0 records in 00:27:48.866 1+0 records out 00:27:48.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309532 s, 13.2 MB/s 00:27:48.866 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:48.867 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:27:48.867 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:48.867 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:27:48.867 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:27:48.867 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:48.867 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:48.867 17:15:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:27:48.867 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:27:48.867 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:27:49.432 7936+0 records in 00:27:49.432 7936+0 records out 00:27:49.432 32505856 bytes (33 MB, 31 MiB) copied, 0.58457 s, 55.6 MB/s 00:27:49.432 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:49.432 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:49.432 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:49.432 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:49.432 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:27:49.432 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:49.432 17:15:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:49.691 17:15:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:49.691 [2024-11-08 17:15:26.162829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.691 [2024-11-08 17:15:26.170913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:49.691 "name": "raid_bdev1", 00:27:49.691 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:49.691 "strip_size_kb": 0, 00:27:49.691 "state": "online", 00:27:49.691 "raid_level": "raid1", 00:27:49.691 "superblock": true, 00:27:49.691 "num_base_bdevs": 2, 00:27:49.691 "num_base_bdevs_discovered": 1, 00:27:49.691 "num_base_bdevs_operational": 1, 00:27:49.691 "base_bdevs_list": [ 00:27:49.691 { 00:27:49.691 "name": null, 00:27:49.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.691 "is_configured": false, 00:27:49.691 "data_offset": 0, 00:27:49.691 "data_size": 7936 00:27:49.691 }, 00:27:49.691 { 00:27:49.691 "name": "BaseBdev2", 00:27:49.691 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:49.691 "is_configured": true, 00:27:49.691 "data_offset": 256, 00:27:49.691 "data_size": 7936 00:27:49.691 } 00:27:49.691 ] 00:27:49.691 }' 00:27:49.691 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:49.691 17:15:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.949 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:49.949 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.949 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.949 [2024-11-08 17:15:26.475004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:49.949 [2024-11-08 17:15:26.483145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:27:49.949 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.949 17:15:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:49.949 [2024-11-08 17:15:26.484789] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:50.883 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:50.883 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:50.884 "name": "raid_bdev1", 00:27:50.884 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:50.884 "strip_size_kb": 0, 00:27:50.884 "state": "online", 00:27:50.884 "raid_level": "raid1", 00:27:50.884 "superblock": true, 00:27:50.884 "num_base_bdevs": 2, 00:27:50.884 "num_base_bdevs_discovered": 2, 00:27:50.884 "num_base_bdevs_operational": 2, 00:27:50.884 "process": { 00:27:50.884 "type": "rebuild", 00:27:50.884 "target": "spare", 00:27:50.884 "progress": { 00:27:50.884 "blocks": 2560, 00:27:50.884 "percent": 32 00:27:50.884 } 00:27:50.884 }, 00:27:50.884 "base_bdevs_list": [ 00:27:50.884 { 00:27:50.884 "name": "spare", 00:27:50.884 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:50.884 "is_configured": true, 00:27:50.884 "data_offset": 256, 00:27:50.884 "data_size": 7936 00:27:50.884 }, 00:27:50.884 { 00:27:50.884 "name": "BaseBdev2", 00:27:50.884 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:50.884 "is_configured": true, 00:27:50.884 "data_offset": 256, 00:27:50.884 "data_size": 7936 00:27:50.884 } 00:27:50.884 ] 00:27:50.884 }' 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:50.884 
17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.884 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.884 [2024-11-08 17:15:27.591461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:51.142 [2024-11-08 17:15:27.691634] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:51.142 [2024-11-08 17:15:27.691695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:51.142 [2024-11-08 17:15:27.691709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:51.142 [2024-11-08 17:15:27.691718] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:51.142 17:15:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:51.142 "name": "raid_bdev1", 00:27:51.142 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:51.142 "strip_size_kb": 0, 00:27:51.142 "state": "online", 00:27:51.142 "raid_level": "raid1", 00:27:51.142 "superblock": true, 00:27:51.142 "num_base_bdevs": 2, 00:27:51.142 "num_base_bdevs_discovered": 1, 00:27:51.142 "num_base_bdevs_operational": 1, 00:27:51.142 "base_bdevs_list": [ 00:27:51.142 { 00:27:51.142 "name": null, 00:27:51.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.142 "is_configured": false, 00:27:51.142 "data_offset": 0, 00:27:51.142 "data_size": 7936 00:27:51.142 }, 00:27:51.142 { 00:27:51.142 "name": "BaseBdev2", 00:27:51.142 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:51.142 "is_configured": true, 00:27:51.142 "data_offset": 256, 00:27:51.142 "data_size": 7936 00:27:51.142 } 00:27:51.142 ] 00:27:51.142 }' 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:51.142 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.401 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:51.401 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:51.401 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:51.401 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:51.401 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:51.401 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.401 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.401 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.401 17:15:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:51.401 "name": "raid_bdev1", 00:27:51.401 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:51.401 "strip_size_kb": 0, 00:27:51.401 "state": "online", 00:27:51.401 "raid_level": "raid1", 00:27:51.401 "superblock": true, 00:27:51.401 "num_base_bdevs": 2, 00:27:51.401 "num_base_bdevs_discovered": 1, 00:27:51.401 "num_base_bdevs_operational": 1, 00:27:51.401 "base_bdevs_list": [ 00:27:51.401 { 00:27:51.401 "name": null, 00:27:51.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.401 
"is_configured": false, 00:27:51.401 "data_offset": 0, 00:27:51.401 "data_size": 7936 00:27:51.401 }, 00:27:51.401 { 00:27:51.401 "name": "BaseBdev2", 00:27:51.401 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:51.401 "is_configured": true, 00:27:51.401 "data_offset": 256, 00:27:51.401 "data_size": 7936 00:27:51.401 } 00:27:51.401 ] 00:27:51.401 }' 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.401 [2024-11-08 17:15:28.088447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:51.401 [2024-11-08 17:15:28.095954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.401 17:15:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:51.401 [2024-11-08 17:15:28.097575] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:52.778 17:15:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:52.778 "name": "raid_bdev1", 00:27:52.778 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:52.778 "strip_size_kb": 0, 00:27:52.778 "state": "online", 00:27:52.778 "raid_level": "raid1", 00:27:52.778 "superblock": true, 00:27:52.778 "num_base_bdevs": 2, 00:27:52.778 "num_base_bdevs_discovered": 2, 00:27:52.778 "num_base_bdevs_operational": 2, 00:27:52.778 "process": { 00:27:52.778 "type": "rebuild", 00:27:52.778 "target": "spare", 00:27:52.778 "progress": { 00:27:52.778 "blocks": 2560, 00:27:52.778 "percent": 32 00:27:52.778 } 00:27:52.778 }, 00:27:52.778 "base_bdevs_list": [ 00:27:52.778 { 00:27:52.778 "name": "spare", 00:27:52.778 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:52.778 "is_configured": true, 00:27:52.778 "data_offset": 256, 00:27:52.778 "data_size": 7936 00:27:52.778 }, 
00:27:52.778 { 00:27:52.778 "name": "BaseBdev2", 00:27:52.778 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:52.778 "is_configured": true, 00:27:52.778 "data_offset": 256, 00:27:52.778 "data_size": 7936 00:27:52.778 } 00:27:52.778 ] 00:27:52.778 }' 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:27:52.778 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:27:52.778 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=617 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:52.779 17:15:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:52.779 "name": "raid_bdev1", 00:27:52.779 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:52.779 "strip_size_kb": 0, 00:27:52.779 "state": "online", 00:27:52.779 "raid_level": "raid1", 00:27:52.779 "superblock": true, 00:27:52.779 "num_base_bdevs": 2, 00:27:52.779 "num_base_bdevs_discovered": 2, 00:27:52.779 "num_base_bdevs_operational": 2, 00:27:52.779 "process": { 00:27:52.779 "type": "rebuild", 00:27:52.779 "target": "spare", 00:27:52.779 "progress": { 00:27:52.779 "blocks": 2816, 00:27:52.779 "percent": 35 00:27:52.779 } 00:27:52.779 }, 00:27:52.779 "base_bdevs_list": [ 00:27:52.779 { 00:27:52.779 "name": "spare", 00:27:52.779 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:52.779 "is_configured": true, 00:27:52.779 "data_offset": 256, 00:27:52.779 "data_size": 7936 00:27:52.779 }, 00:27:52.779 { 00:27:52.779 "name": "BaseBdev2", 00:27:52.779 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:52.779 
"is_configured": true, 00:27:52.779 "data_offset": 256, 00:27:52.779 "data_size": 7936 00:27:52.779 } 00:27:52.779 ] 00:27:52.779 }' 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:52.779 17:15:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.735 17:15:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:53.735 "name": "raid_bdev1", 00:27:53.735 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:53.735 "strip_size_kb": 0, 00:27:53.735 "state": "online", 00:27:53.735 "raid_level": "raid1", 00:27:53.735 "superblock": true, 00:27:53.735 "num_base_bdevs": 2, 00:27:53.735 "num_base_bdevs_discovered": 2, 00:27:53.735 "num_base_bdevs_operational": 2, 00:27:53.735 "process": { 00:27:53.735 "type": "rebuild", 00:27:53.735 "target": "spare", 00:27:53.735 "progress": { 00:27:53.735 "blocks": 5632, 00:27:53.735 "percent": 70 00:27:53.735 } 00:27:53.735 }, 00:27:53.735 "base_bdevs_list": [ 00:27:53.735 { 00:27:53.735 "name": "spare", 00:27:53.735 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:53.735 "is_configured": true, 00:27:53.735 "data_offset": 256, 00:27:53.735 "data_size": 7936 00:27:53.735 }, 00:27:53.735 { 00:27:53.735 "name": "BaseBdev2", 00:27:53.735 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:53.735 "is_configured": true, 00:27:53.735 "data_offset": 256, 00:27:53.735 "data_size": 7936 00:27:53.735 } 00:27:53.735 ] 00:27:53.735 }' 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:53.735 17:15:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:54.669 [2024-11-08 17:15:31.214809] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:27:54.669 [2024-11-08 17:15:31.214901] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:54.669 [2024-11-08 17:15:31.215012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:54.927 "name": "raid_bdev1", 00:27:54.927 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:54.927 "strip_size_kb": 0, 00:27:54.927 "state": "online", 00:27:54.927 "raid_level": "raid1", 00:27:54.927 "superblock": true, 00:27:54.927 
"num_base_bdevs": 2, 00:27:54.927 "num_base_bdevs_discovered": 2, 00:27:54.927 "num_base_bdevs_operational": 2, 00:27:54.927 "base_bdevs_list": [ 00:27:54.927 { 00:27:54.927 "name": "spare", 00:27:54.927 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:54.927 "is_configured": true, 00:27:54.927 "data_offset": 256, 00:27:54.927 "data_size": 7936 00:27:54.927 }, 00:27:54.927 { 00:27:54.927 "name": "BaseBdev2", 00:27:54.927 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:54.927 "is_configured": true, 00:27:54.927 "data_offset": 256, 00:27:54.927 "data_size": 7936 00:27:54.927 } 00:27:54.927 ] 00:27:54.927 }' 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.927 
17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:54.927 "name": "raid_bdev1", 00:27:54.927 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:54.927 "strip_size_kb": 0, 00:27:54.927 "state": "online", 00:27:54.927 "raid_level": "raid1", 00:27:54.927 "superblock": true, 00:27:54.927 "num_base_bdevs": 2, 00:27:54.927 "num_base_bdevs_discovered": 2, 00:27:54.927 "num_base_bdevs_operational": 2, 00:27:54.927 "base_bdevs_list": [ 00:27:54.927 { 00:27:54.927 "name": "spare", 00:27:54.927 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:54.927 "is_configured": true, 00:27:54.927 "data_offset": 256, 00:27:54.927 "data_size": 7936 00:27:54.927 }, 00:27:54.927 { 00:27:54.927 "name": "BaseBdev2", 00:27:54.927 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:54.927 "is_configured": true, 00:27:54.927 "data_offset": 256, 00:27:54.927 "data_size": 7936 00:27:54.927 } 00:27:54.927 ] 00:27:54.927 }' 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:54.927 "name": "raid_bdev1", 00:27:54.927 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:54.927 
"strip_size_kb": 0, 00:27:54.927 "state": "online", 00:27:54.927 "raid_level": "raid1", 00:27:54.927 "superblock": true, 00:27:54.927 "num_base_bdevs": 2, 00:27:54.927 "num_base_bdevs_discovered": 2, 00:27:54.927 "num_base_bdevs_operational": 2, 00:27:54.927 "base_bdevs_list": [ 00:27:54.927 { 00:27:54.927 "name": "spare", 00:27:54.927 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:54.927 "is_configured": true, 00:27:54.927 "data_offset": 256, 00:27:54.927 "data_size": 7936 00:27:54.927 }, 00:27:54.927 { 00:27:54.927 "name": "BaseBdev2", 00:27:54.927 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:54.927 "is_configured": true, 00:27:54.927 "data_offset": 256, 00:27:54.927 "data_size": 7936 00:27:54.927 } 00:27:54.927 ] 00:27:54.927 }' 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:54.927 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:55.185 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:55.185 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.185 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:55.185 [2024-11-08 17:15:31.895644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:55.185 [2024-11-08 17:15:31.895677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:55.185 [2024-11-08 17:15:31.895768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:55.185 [2024-11-08 17:15:31.895838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:55.185 [2024-11-08 17:15:31.895847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:27:55.471 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.471 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.471 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.471 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:27:55.471 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:55.471 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.471 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:55.471 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:55.471 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:27:55.472 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:55.472 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:55.472 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:55.472 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:55.472 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:55.472 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:55.472 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:27:55.472 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:55.472 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:55.472 17:15:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:55.472 /dev/nbd0 00:27:55.472 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:55.769 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:55.769 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:27:55.769 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.770 1+0 records in 00:27:55.770 1+0 records out 00:27:55.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238981 s, 17.1 MB/s 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:27:55.770 /dev/nbd1 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.770 1+0 records in 00:27:55.770 1+0 records out 00:27:55.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180203 s, 22.7 MB/s 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:55.770 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.028 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.029 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:56.029 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:56.029 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.029 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.029 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.287 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.287 [2024-11-08 17:15:32.968560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:56.287 [2024-11-08 17:15:32.968634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.287 [2024-11-08 17:15:32.968657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:56.287 [2024-11-08 17:15:32.968665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:27:56.287 [2024-11-08 17:15:32.970521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.287 [2024-11-08 17:15:32.970555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:56.287 [2024-11-08 17:15:32.970640] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:56.287 [2024-11-08 17:15:32.970693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:56.287 [2024-11-08 17:15:32.970822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:56.287 spare 00:27:56.288 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.288 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:27:56.288 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.288 17:15:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.546 [2024-11-08 17:15:33.070904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:56.546 [2024-11-08 17:15:33.070966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:56.546 [2024-11-08 17:15:33.071097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:27:56.546 [2024-11-08 17:15:33.071257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:56.546 [2024-11-08 17:15:33.071265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:56.546 [2024-11-08 17:15:33.071389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:56.547 "name": "raid_bdev1", 00:27:56.547 "uuid": 
"9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:56.547 "strip_size_kb": 0, 00:27:56.547 "state": "online", 00:27:56.547 "raid_level": "raid1", 00:27:56.547 "superblock": true, 00:27:56.547 "num_base_bdevs": 2, 00:27:56.547 "num_base_bdevs_discovered": 2, 00:27:56.547 "num_base_bdevs_operational": 2, 00:27:56.547 "base_bdevs_list": [ 00:27:56.547 { 00:27:56.547 "name": "spare", 00:27:56.547 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:56.547 "is_configured": true, 00:27:56.547 "data_offset": 256, 00:27:56.547 "data_size": 7936 00:27:56.547 }, 00:27:56.547 { 00:27:56.547 "name": "BaseBdev2", 00:27:56.547 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:56.547 "is_configured": true, 00:27:56.547 "data_offset": 256, 00:27:56.547 "data_size": 7936 00:27:56.547 } 00:27:56.547 ] 00:27:56.547 }' 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:56.547 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:56.806 "name": "raid_bdev1", 00:27:56.806 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:56.806 "strip_size_kb": 0, 00:27:56.806 "state": "online", 00:27:56.806 "raid_level": "raid1", 00:27:56.806 "superblock": true, 00:27:56.806 "num_base_bdevs": 2, 00:27:56.806 "num_base_bdevs_discovered": 2, 00:27:56.806 "num_base_bdevs_operational": 2, 00:27:56.806 "base_bdevs_list": [ 00:27:56.806 { 00:27:56.806 "name": "spare", 00:27:56.806 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:56.806 "is_configured": true, 00:27:56.806 "data_offset": 256, 00:27:56.806 "data_size": 7936 00:27:56.806 }, 00:27:56.806 { 00:27:56.806 "name": "BaseBdev2", 00:27:56.806 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:56.806 "is_configured": true, 00:27:56.806 "data_offset": 256, 00:27:56.806 "data_size": 7936 00:27:56.806 } 00:27:56.806 ] 00:27:56.806 }' 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:56.806 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:57.064 [2024-11-08 17:15:33.552698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:57.064 "name": "raid_bdev1", 00:27:57.064 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:57.064 "strip_size_kb": 0, 00:27:57.064 "state": "online", 00:27:57.064 "raid_level": "raid1", 00:27:57.064 "superblock": true, 00:27:57.064 "num_base_bdevs": 2, 00:27:57.064 "num_base_bdevs_discovered": 1, 00:27:57.064 "num_base_bdevs_operational": 1, 00:27:57.064 "base_bdevs_list": [ 00:27:57.064 { 00:27:57.064 "name": null, 00:27:57.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.064 "is_configured": false, 00:27:57.064 "data_offset": 0, 00:27:57.064 "data_size": 7936 00:27:57.064 }, 00:27:57.064 { 00:27:57.064 "name": "BaseBdev2", 00:27:57.064 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:57.064 "is_configured": true, 00:27:57.064 "data_offset": 256, 00:27:57.064 "data_size": 7936 00:27:57.064 } 00:27:57.064 ] 00:27:57.064 }' 00:27:57.064 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:57.064 17:15:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:57.322 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:57.322 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.322 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:57.322 [2024-11-08 17:15:33.880799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:57.322 [2024-11-08 17:15:33.880996] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:57.322 [2024-11-08 17:15:33.881011] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:57.322 [2024-11-08 17:15:33.881050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:57.322 [2024-11-08 17:15:33.888633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:27:57.322 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.322 17:15:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:27:57.322 [2024-11-08 17:15:33.890418] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:58.256 "name": "raid_bdev1", 00:27:58.256 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:58.256 "strip_size_kb": 0, 00:27:58.256 "state": "online", 00:27:58.256 "raid_level": "raid1", 00:27:58.256 "superblock": true, 00:27:58.256 "num_base_bdevs": 2, 00:27:58.256 "num_base_bdevs_discovered": 2, 00:27:58.256 "num_base_bdevs_operational": 2, 00:27:58.256 "process": { 00:27:58.256 "type": "rebuild", 00:27:58.256 "target": "spare", 00:27:58.256 "progress": { 00:27:58.256 "blocks": 2560, 00:27:58.256 "percent": 32 00:27:58.256 } 00:27:58.256 }, 00:27:58.256 "base_bdevs_list": [ 00:27:58.256 { 00:27:58.256 "name": "spare", 00:27:58.256 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:58.256 "is_configured": true, 00:27:58.256 "data_offset": 256, 00:27:58.256 "data_size": 7936 00:27:58.256 }, 00:27:58.256 { 00:27:58.256 "name": "BaseBdev2", 00:27:58.256 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:58.256 "is_configured": true, 00:27:58.256 "data_offset": 256, 00:27:58.256 "data_size": 7936 00:27:58.256 } 00:27:58.256 ] 00:27:58.256 }' 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:58.256 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:58.515 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:58.515 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:27:58.515 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.515 17:15:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:58.515 [2024-11-08 17:15:34.996898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:58.515 [2024-11-08 17:15:34.997281] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:58.515 [2024-11-08 17:15:34.997327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:58.515 [2024-11-08 17:15:34.997339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:58.515 [2024-11-08 17:15:34.997347] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:58.515 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.515 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:58.515 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:58.515 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:58.515 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:58.515 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:58.515 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:58.515 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:58.515 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:58.515 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:58.516 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:58.516 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.516 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.516 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.516 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:58.516 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.516 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:58.516 "name": "raid_bdev1", 00:27:58.516 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:58.516 "strip_size_kb": 0, 00:27:58.516 "state": "online", 00:27:58.516 "raid_level": "raid1", 00:27:58.516 "superblock": true, 00:27:58.516 "num_base_bdevs": 2, 00:27:58.516 "num_base_bdevs_discovered": 1, 00:27:58.516 "num_base_bdevs_operational": 1, 00:27:58.516 "base_bdevs_list": [ 00:27:58.516 { 00:27:58.516 "name": null, 00:27:58.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.516 
"is_configured": false, 00:27:58.516 "data_offset": 0, 00:27:58.516 "data_size": 7936 00:27:58.516 }, 00:27:58.516 { 00:27:58.516 "name": "BaseBdev2", 00:27:58.516 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:58.516 "is_configured": true, 00:27:58.516 "data_offset": 256, 00:27:58.516 "data_size": 7936 00:27:58.516 } 00:27:58.516 ] 00:27:58.516 }' 00:27:58.516 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:58.516 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:58.774 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:58.774 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.774 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:58.774 [2024-11-08 17:15:35.346529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:58.774 [2024-11-08 17:15:35.346834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:58.774 [2024-11-08 17:15:35.346916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:58.774 [2024-11-08 17:15:35.346971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:58.774 [2024-11-08 17:15:35.347241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:58.774 [2024-11-08 17:15:35.347332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:58.774 [2024-11-08 17:15:35.347448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:58.774 [2024-11-08 17:15:35.347478] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:27:58.774 [2024-11-08 17:15:35.347563] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:58.774 [2024-11-08 17:15:35.347604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:58.774 [2024-11-08 17:15:35.355306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:27:58.774 spare 00:27:58.774 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.774 17:15:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:27:58.774 [2024-11-08 17:15:35.357114] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:59.710 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:59.710 "name": "raid_bdev1", 00:27:59.710 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:59.710 "strip_size_kb": 0, 00:27:59.710 "state": "online", 00:27:59.710 "raid_level": "raid1", 00:27:59.710 "superblock": true, 00:27:59.710 "num_base_bdevs": 2, 00:27:59.710 "num_base_bdevs_discovered": 2, 00:27:59.710 "num_base_bdevs_operational": 2, 00:27:59.710 "process": { 00:27:59.710 "type": "rebuild", 00:27:59.710 "target": "spare", 00:27:59.710 "progress": { 00:27:59.711 "blocks": 2560, 00:27:59.711 "percent": 32 00:27:59.711 } 00:27:59.711 }, 00:27:59.711 "base_bdevs_list": [ 00:27:59.711 { 00:27:59.711 "name": "spare", 00:27:59.711 "uuid": "c0eba2fa-575a-515c-83b2-715ac7f9e9b8", 00:27:59.711 "is_configured": true, 00:27:59.711 "data_offset": 256, 00:27:59.711 "data_size": 7936 00:27:59.711 }, 00:27:59.711 { 00:27:59.711 "name": "BaseBdev2", 00:27:59.711 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:59.711 "is_configured": true, 00:27:59.711 "data_offset": 256, 00:27:59.711 "data_size": 7936 00:27:59.711 } 00:27:59.711 ] 00:27:59.711 }' 00:27:59.711 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.969 17:15:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:59.969 [2024-11-08 17:15:36.464009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:59.969 [2024-11-08 17:15:36.464433] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:59.969 [2024-11-08 17:15:36.464477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:59.969 [2024-11-08 17:15:36.464491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:59.969 [2024-11-08 17:15:36.464497] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:59.969 17:15:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:59.969 "name": "raid_bdev1", 00:27:59.969 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:27:59.969 "strip_size_kb": 0, 00:27:59.969 "state": "online", 00:27:59.969 "raid_level": "raid1", 00:27:59.969 "superblock": true, 00:27:59.969 "num_base_bdevs": 2, 00:27:59.969 "num_base_bdevs_discovered": 1, 00:27:59.969 "num_base_bdevs_operational": 1, 00:27:59.969 "base_bdevs_list": [ 00:27:59.969 { 00:27:59.969 "name": null, 00:27:59.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.969 "is_configured": false, 00:27:59.969 "data_offset": 0, 00:27:59.969 "data_size": 7936 00:27:59.969 }, 00:27:59.969 { 00:27:59.969 "name": "BaseBdev2", 00:27:59.969 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:27:59.969 "is_configured": true, 00:27:59.969 "data_offset": 256, 00:27:59.969 "data_size": 7936 00:27:59.969 } 00:27:59.969 ] 00:27:59.969 }' 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:59.969 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.226 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:00.226 "name": "raid_bdev1", 00:28:00.227 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:28:00.227 "strip_size_kb": 0, 00:28:00.227 "state": "online", 00:28:00.227 "raid_level": "raid1", 00:28:00.227 "superblock": true, 00:28:00.227 "num_base_bdevs": 2, 00:28:00.227 "num_base_bdevs_discovered": 1, 00:28:00.227 "num_base_bdevs_operational": 1, 00:28:00.227 "base_bdevs_list": [ 00:28:00.227 { 00:28:00.227 "name": null, 00:28:00.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.227 "is_configured": false, 00:28:00.227 "data_offset": 0, 00:28:00.227 "data_size": 7936 00:28:00.227 }, 00:28:00.227 { 00:28:00.227 "name": "BaseBdev2", 00:28:00.227 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:28:00.227 "is_configured": true, 
00:28:00.227 "data_offset": 256, 00:28:00.227 "data_size": 7936 00:28:00.227 } 00:28:00.227 ] 00:28:00.227 }' 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:00.227 [2024-11-08 17:15:36.885183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:00.227 [2024-11-08 17:15:36.885328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.227 [2024-11-08 17:15:36.885368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:28:00.227 [2024-11-08 17:15:36.885441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.227 [2024-11-08 17:15:36.885660] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.227 [2024-11-08 17:15:36.885719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:00.227 [2024-11-08 17:15:36.885877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:00.227 [2024-11-08 17:15:36.885934] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:00.227 [2024-11-08 17:15:36.885947] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:00.227 [2024-11-08 17:15:36.885956] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:28:00.227 BaseBdev1 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.227 17:15:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:01.600 "name": "raid_bdev1", 00:28:01.600 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:28:01.600 "strip_size_kb": 0, 00:28:01.600 "state": "online", 00:28:01.600 "raid_level": "raid1", 00:28:01.600 "superblock": true, 00:28:01.600 "num_base_bdevs": 2, 00:28:01.600 "num_base_bdevs_discovered": 1, 00:28:01.600 "num_base_bdevs_operational": 1, 00:28:01.600 "base_bdevs_list": [ 00:28:01.600 { 00:28:01.600 "name": null, 00:28:01.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.600 "is_configured": false, 00:28:01.600 "data_offset": 0, 00:28:01.600 "data_size": 7936 00:28:01.600 }, 00:28:01.600 { 00:28:01.600 "name": "BaseBdev2", 00:28:01.600 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:28:01.600 "is_configured": true, 00:28:01.600 "data_offset": 256, 00:28:01.600 "data_size": 7936 00:28:01.600 } 00:28:01.600 ] 00:28:01.600 }' 00:28:01.600 17:15:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:01.600 17:15:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:01.600 "name": "raid_bdev1", 00:28:01.600 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:28:01.600 "strip_size_kb": 0, 00:28:01.600 "state": "online", 00:28:01.600 "raid_level": "raid1", 00:28:01.600 "superblock": true, 00:28:01.600 "num_base_bdevs": 2, 00:28:01.600 "num_base_bdevs_discovered": 1, 00:28:01.600 "num_base_bdevs_operational": 1, 00:28:01.600 "base_bdevs_list": [ 00:28:01.600 { 00:28:01.600 "name": null, 00:28:01.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.600 "is_configured": false, 00:28:01.600 "data_offset": 0, 00:28:01.600 
"data_size": 7936 00:28:01.600 }, 00:28:01.600 { 00:28:01.600 "name": "BaseBdev2", 00:28:01.600 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:28:01.600 "is_configured": true, 00:28:01.600 "data_offset": 256, 00:28:01.600 "data_size": 7936 00:28:01.600 } 00:28:01.600 ] 00:28:01.600 }' 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:01.600 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:01.858 [2024-11-08 17:15:38.325531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:01.858 [2024-11-08 17:15:38.325688] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:01.858 [2024-11-08 17:15:38.325699] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:01.858 request: 00:28:01.858 { 00:28:01.858 "base_bdev": "BaseBdev1", 00:28:01.858 "raid_bdev": "raid_bdev1", 00:28:01.858 "method": "bdev_raid_add_base_bdev", 00:28:01.858 "req_id": 1 00:28:01.858 } 00:28:01.858 Got JSON-RPC error response 00:28:01.858 response: 00:28:01.858 { 00:28:01.858 "code": -22, 00:28:01.858 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:01.858 } 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:01.858 17:15:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:02.792 "name": "raid_bdev1", 00:28:02.792 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:28:02.792 "strip_size_kb": 0, 00:28:02.792 "state": "online", 00:28:02.792 "raid_level": "raid1", 00:28:02.792 "superblock": true, 00:28:02.792 "num_base_bdevs": 2, 00:28:02.792 "num_base_bdevs_discovered": 1, 00:28:02.792 "num_base_bdevs_operational": 1, 00:28:02.792 "base_bdevs_list": [ 
00:28:02.792 { 00:28:02.792 "name": null, 00:28:02.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.792 "is_configured": false, 00:28:02.792 "data_offset": 0, 00:28:02.792 "data_size": 7936 00:28:02.792 }, 00:28:02.792 { 00:28:02.792 "name": "BaseBdev2", 00:28:02.792 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:28:02.792 "is_configured": true, 00:28:02.792 "data_offset": 256, 00:28:02.792 "data_size": 7936 00:28:02.792 } 00:28:02.792 ] 00:28:02.792 }' 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.792 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:03.049 "name": "raid_bdev1", 00:28:03.049 "uuid": "9c4bba22-1855-4713-acc2-3ed135fa000d", 00:28:03.049 "strip_size_kb": 0, 00:28:03.049 "state": "online", 00:28:03.049 "raid_level": "raid1", 00:28:03.049 "superblock": true, 00:28:03.049 "num_base_bdevs": 2, 00:28:03.049 "num_base_bdevs_discovered": 1, 00:28:03.049 "num_base_bdevs_operational": 1, 00:28:03.049 "base_bdevs_list": [ 00:28:03.049 { 00:28:03.049 "name": null, 00:28:03.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.049 "is_configured": false, 00:28:03.049 "data_offset": 0, 00:28:03.049 "data_size": 7936 00:28:03.049 }, 00:28:03.049 { 00:28:03.049 "name": "BaseBdev2", 00:28:03.049 "uuid": "b0a47ee0-d8ad-56f3-832c-518b24df70a1", 00:28:03.049 "is_configured": true, 00:28:03.049 "data_offset": 256, 00:28:03.049 "data_size": 7936 00:28:03.049 } 00:28:03.049 ] 00:28:03.049 }' 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 85935 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 85935 ']' 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 85935 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:28:03.049 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:03.049 
17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85935 00:28:03.360 killing process with pid 85935 00:28:03.360 Received shutdown signal, test time was about 60.000000 seconds 00:28:03.360 00:28:03.360 Latency(us) 00:28:03.360 [2024-11-08T17:15:40.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.360 [2024-11-08T17:15:40.075Z] =================================================================================================================== 00:28:03.360 [2024-11-08T17:15:40.075Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:03.360 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:03.360 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:03.360 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85935' 00:28:03.360 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 85935 00:28:03.360 [2024-11-08 17:15:39.779355] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:03.360 17:15:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 85935 00:28:03.360 [2024-11-08 17:15:39.779477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:03.360 [2024-11-08 17:15:39.779524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:03.360 [2024-11-08 17:15:39.779534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:28:03.360 [2024-11-08 17:15:39.944219] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:03.924 17:15:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:28:03.924 00:28:03.924 real 0m16.847s 00:28:03.924 user 0m21.435s 00:28:03.924 sys 0m1.893s 00:28:03.924 ************************************ 00:28:03.924 END TEST raid_rebuild_test_sb_md_separate 00:28:03.924 ************************************ 00:28:03.924 17:15:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:03.924 17:15:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:03.924 17:15:40 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:28:03.924 17:15:40 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:28:03.924 17:15:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:28:03.924 17:15:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:03.924 17:15:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:03.924 ************************************ 00:28:03.924 START TEST raid_state_function_test_sb_md_interleaved 00:28:03.924 ************************************ 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:28:03.924 Process raid pid: 86599 00:28:03.924 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=86599 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86599' 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 86599 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 86599 ']' 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.924 17:15:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:04.181 [2024-11-08 17:15:40.649909] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:28:04.181 [2024-11-08 17:15:40.650039] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.181 [2024-11-08 17:15:40.804259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.438 [2024-11-08 17:15:40.903913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.438 [2024-11-08 17:15:41.025973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:04.438 [2024-11-08 17:15:41.026014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:05.004 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:05.004 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:28:05.004 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:28:05.004 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.004 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.004 [2024-11-08 17:15:41.453813] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:05.004 [2024-11-08 17:15:41.453859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:05.004 [2024-11-08 17:15:41.453868] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:05.004 [2024-11-08 17:15:41.453877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:05.004 17:15:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.004 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:05.004 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:05.004 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.005 17:15:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.005 "name": "Existed_Raid", 00:28:05.005 "uuid": "f4076ee6-487d-460d-8e9b-3c41256e71b9", 00:28:05.005 "strip_size_kb": 0, 00:28:05.005 "state": "configuring", 00:28:05.005 "raid_level": "raid1", 00:28:05.005 "superblock": true, 00:28:05.005 "num_base_bdevs": 2, 00:28:05.005 "num_base_bdevs_discovered": 0, 00:28:05.005 "num_base_bdevs_operational": 2, 00:28:05.005 "base_bdevs_list": [ 00:28:05.005 { 00:28:05.005 "name": "BaseBdev1", 00:28:05.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.005 "is_configured": false, 00:28:05.005 "data_offset": 0, 00:28:05.005 "data_size": 0 00:28:05.005 }, 00:28:05.005 { 00:28:05.005 "name": "BaseBdev2", 00:28:05.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.005 "is_configured": false, 00:28:05.005 "data_offset": 0, 00:28:05.005 "data_size": 0 00:28:05.005 } 00:28:05.005 ] 00:28:05.005 }' 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:05.005 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.301 [2024-11-08 17:15:41.749837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:05.301 [2024-11-08 17:15:41.749869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.301 [2024-11-08 17:15:41.757827] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:05.301 [2024-11-08 17:15:41.757858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:05.301 [2024-11-08 17:15:41.757865] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:05.301 [2024-11-08 17:15:41.757875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.301 [2024-11-08 17:15:41.787596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:05.301 BaseBdev1 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.301 [ 00:28:05.301 { 00:28:05.301 "name": "BaseBdev1", 00:28:05.301 "aliases": [ 00:28:05.301 "883c6297-9381-442b-9f0e-728a75e24a7c" 00:28:05.301 ], 00:28:05.301 "product_name": "Malloc disk", 00:28:05.301 "block_size": 4128, 00:28:05.301 "num_blocks": 8192, 00:28:05.301 "uuid": "883c6297-9381-442b-9f0e-728a75e24a7c", 00:28:05.301 "md_size": 32, 00:28:05.301 
"md_interleave": true, 00:28:05.301 "dif_type": 0, 00:28:05.301 "assigned_rate_limits": { 00:28:05.301 "rw_ios_per_sec": 0, 00:28:05.301 "rw_mbytes_per_sec": 0, 00:28:05.301 "r_mbytes_per_sec": 0, 00:28:05.301 "w_mbytes_per_sec": 0 00:28:05.301 }, 00:28:05.301 "claimed": true, 00:28:05.301 "claim_type": "exclusive_write", 00:28:05.301 "zoned": false, 00:28:05.301 "supported_io_types": { 00:28:05.301 "read": true, 00:28:05.301 "write": true, 00:28:05.301 "unmap": true, 00:28:05.301 "flush": true, 00:28:05.301 "reset": true, 00:28:05.301 "nvme_admin": false, 00:28:05.301 "nvme_io": false, 00:28:05.301 "nvme_io_md": false, 00:28:05.301 "write_zeroes": true, 00:28:05.301 "zcopy": true, 00:28:05.301 "get_zone_info": false, 00:28:05.301 "zone_management": false, 00:28:05.301 "zone_append": false, 00:28:05.301 "compare": false, 00:28:05.301 "compare_and_write": false, 00:28:05.301 "abort": true, 00:28:05.301 "seek_hole": false, 00:28:05.301 "seek_data": false, 00:28:05.301 "copy": true, 00:28:05.301 "nvme_iov_md": false 00:28:05.301 }, 00:28:05.301 "memory_domains": [ 00:28:05.301 { 00:28:05.301 "dma_device_id": "system", 00:28:05.301 "dma_device_type": 1 00:28:05.301 }, 00:28:05.301 { 00:28:05.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.301 "dma_device_type": 2 00:28:05.301 } 00:28:05.301 ], 00:28:05.301 "driver_specific": {} 00:28:05.301 } 00:28:05.301 ] 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:05.301 17:15:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.301 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.301 "name": "Existed_Raid", 00:28:05.301 "uuid": "98db1d2e-75b3-4be0-8fde-853ced752247", 00:28:05.301 "strip_size_kb": 0, 00:28:05.301 "state": "configuring", 00:28:05.301 "raid_level": "raid1", 
00:28:05.301 "superblock": true, 00:28:05.301 "num_base_bdevs": 2, 00:28:05.301 "num_base_bdevs_discovered": 1, 00:28:05.301 "num_base_bdevs_operational": 2, 00:28:05.301 "base_bdevs_list": [ 00:28:05.301 { 00:28:05.301 "name": "BaseBdev1", 00:28:05.301 "uuid": "883c6297-9381-442b-9f0e-728a75e24a7c", 00:28:05.301 "is_configured": true, 00:28:05.301 "data_offset": 256, 00:28:05.301 "data_size": 7936 00:28:05.301 }, 00:28:05.301 { 00:28:05.301 "name": "BaseBdev2", 00:28:05.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.302 "is_configured": false, 00:28:05.302 "data_offset": 0, 00:28:05.302 "data_size": 0 00:28:05.302 } 00:28:05.302 ] 00:28:05.302 }' 00:28:05.302 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:05.302 17:15:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.582 [2024-11-08 17:15:42.107723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:05.582 [2024-11-08 17:15:42.107786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.582 [2024-11-08 17:15:42.115786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:05.582 [2024-11-08 17:15:42.117458] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:05.582 [2024-11-08 17:15:42.117493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.582 
17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.582 "name": "Existed_Raid", 00:28:05.582 "uuid": "2a5cd9a5-bcb2-4596-ad4b-51bf0ba6ed3b", 00:28:05.582 "strip_size_kb": 0, 00:28:05.582 "state": "configuring", 00:28:05.582 "raid_level": "raid1", 00:28:05.582 "superblock": true, 00:28:05.582 "num_base_bdevs": 2, 00:28:05.582 "num_base_bdevs_discovered": 1, 00:28:05.582 "num_base_bdevs_operational": 2, 00:28:05.582 "base_bdevs_list": [ 00:28:05.582 { 00:28:05.582 "name": "BaseBdev1", 00:28:05.582 "uuid": "883c6297-9381-442b-9f0e-728a75e24a7c", 00:28:05.582 "is_configured": true, 00:28:05.582 "data_offset": 256, 00:28:05.582 "data_size": 7936 00:28:05.582 }, 00:28:05.582 { 00:28:05.582 "name": "BaseBdev2", 00:28:05.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.582 "is_configured": false, 00:28:05.582 "data_offset": 0, 00:28:05.582 "data_size": 0 00:28:05.582 } 00:28:05.582 ] 00:28:05.582 }' 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:28:05.582 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.840 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.841 [2024-11-08 17:15:42.452065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:05.841 [2024-11-08 17:15:42.452240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:05.841 [2024-11-08 17:15:42.452251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:05.841 [2024-11-08 17:15:42.452325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:05.841 [2024-11-08 17:15:42.452387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:05.841 [2024-11-08 17:15:42.452397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:05.841 [2024-11-08 17:15:42.452450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:05.841 BaseBdev2 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.841 [ 00:28:05.841 { 00:28:05.841 "name": "BaseBdev2", 00:28:05.841 "aliases": [ 00:28:05.841 "c7d9709e-d8e4-4f63-984a-d8be65630d4c" 00:28:05.841 ], 00:28:05.841 "product_name": "Malloc disk", 00:28:05.841 "block_size": 4128, 00:28:05.841 "num_blocks": 8192, 00:28:05.841 "uuid": "c7d9709e-d8e4-4f63-984a-d8be65630d4c", 00:28:05.841 "md_size": 32, 00:28:05.841 "md_interleave": true, 00:28:05.841 "dif_type": 0, 00:28:05.841 "assigned_rate_limits": { 00:28:05.841 "rw_ios_per_sec": 0, 00:28:05.841 "rw_mbytes_per_sec": 0, 00:28:05.841 "r_mbytes_per_sec": 0, 00:28:05.841 "w_mbytes_per_sec": 0 00:28:05.841 }, 00:28:05.841 "claimed": true, 00:28:05.841 "claim_type": "exclusive_write", 
00:28:05.841 "zoned": false, 00:28:05.841 "supported_io_types": { 00:28:05.841 "read": true, 00:28:05.841 "write": true, 00:28:05.841 "unmap": true, 00:28:05.841 "flush": true, 00:28:05.841 "reset": true, 00:28:05.841 "nvme_admin": false, 00:28:05.841 "nvme_io": false, 00:28:05.841 "nvme_io_md": false, 00:28:05.841 "write_zeroes": true, 00:28:05.841 "zcopy": true, 00:28:05.841 "get_zone_info": false, 00:28:05.841 "zone_management": false, 00:28:05.841 "zone_append": false, 00:28:05.841 "compare": false, 00:28:05.841 "compare_and_write": false, 00:28:05.841 "abort": true, 00:28:05.841 "seek_hole": false, 00:28:05.841 "seek_data": false, 00:28:05.841 "copy": true, 00:28:05.841 "nvme_iov_md": false 00:28:05.841 }, 00:28:05.841 "memory_domains": [ 00:28:05.841 { 00:28:05.841 "dma_device_id": "system", 00:28:05.841 "dma_device_type": 1 00:28:05.841 }, 00:28:05.841 { 00:28:05.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.841 "dma_device_type": 2 00:28:05.841 } 00:28:05.841 ], 00:28:05.841 "driver_specific": {} 00:28:05.841 } 00:28:05.841 ] 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:05.841 
17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.841 "name": "Existed_Raid", 00:28:05.841 "uuid": "2a5cd9a5-bcb2-4596-ad4b-51bf0ba6ed3b", 00:28:05.841 "strip_size_kb": 0, 00:28:05.841 "state": "online", 00:28:05.841 "raid_level": "raid1", 00:28:05.841 "superblock": true, 00:28:05.841 "num_base_bdevs": 2, 00:28:05.841 "num_base_bdevs_discovered": 2, 00:28:05.841 
"num_base_bdevs_operational": 2, 00:28:05.841 "base_bdevs_list": [ 00:28:05.841 { 00:28:05.841 "name": "BaseBdev1", 00:28:05.841 "uuid": "883c6297-9381-442b-9f0e-728a75e24a7c", 00:28:05.841 "is_configured": true, 00:28:05.841 "data_offset": 256, 00:28:05.841 "data_size": 7936 00:28:05.841 }, 00:28:05.841 { 00:28:05.841 "name": "BaseBdev2", 00:28:05.841 "uuid": "c7d9709e-d8e4-4f63-984a-d8be65630d4c", 00:28:05.841 "is_configured": true, 00:28:05.841 "data_offset": 256, 00:28:05.841 "data_size": 7936 00:28:05.841 } 00:28:05.841 ] 00:28:05.841 }' 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:05.841 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.099 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:06.099 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:06.099 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:06.099 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:06.099 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:28:06.099 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:06.099 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.358 17:15:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.358 [2024-11-08 17:15:42.816483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:06.358 "name": "Existed_Raid", 00:28:06.358 "aliases": [ 00:28:06.358 "2a5cd9a5-bcb2-4596-ad4b-51bf0ba6ed3b" 00:28:06.358 ], 00:28:06.358 "product_name": "Raid Volume", 00:28:06.358 "block_size": 4128, 00:28:06.358 "num_blocks": 7936, 00:28:06.358 "uuid": "2a5cd9a5-bcb2-4596-ad4b-51bf0ba6ed3b", 00:28:06.358 "md_size": 32, 00:28:06.358 "md_interleave": true, 00:28:06.358 "dif_type": 0, 00:28:06.358 "assigned_rate_limits": { 00:28:06.358 "rw_ios_per_sec": 0, 00:28:06.358 "rw_mbytes_per_sec": 0, 00:28:06.358 "r_mbytes_per_sec": 0, 00:28:06.358 "w_mbytes_per_sec": 0 00:28:06.358 }, 00:28:06.358 "claimed": false, 00:28:06.358 "zoned": false, 00:28:06.358 "supported_io_types": { 00:28:06.358 "read": true, 00:28:06.358 "write": true, 00:28:06.358 "unmap": false, 00:28:06.358 "flush": false, 00:28:06.358 "reset": true, 00:28:06.358 "nvme_admin": false, 00:28:06.358 "nvme_io": false, 00:28:06.358 "nvme_io_md": false, 00:28:06.358 "write_zeroes": true, 00:28:06.358 "zcopy": false, 00:28:06.358 "get_zone_info": false, 00:28:06.358 "zone_management": false, 00:28:06.358 "zone_append": false, 00:28:06.358 "compare": false, 00:28:06.358 "compare_and_write": false, 00:28:06.358 "abort": false, 00:28:06.358 "seek_hole": false, 00:28:06.358 "seek_data": false, 00:28:06.358 "copy": false, 00:28:06.358 "nvme_iov_md": false 00:28:06.358 }, 00:28:06.358 "memory_domains": [ 00:28:06.358 { 00:28:06.358 "dma_device_id": "system", 00:28:06.358 "dma_device_type": 1 00:28:06.358 }, 00:28:06.358 { 00:28:06.358 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:28:06.358 "dma_device_type": 2 00:28:06.358 }, 00:28:06.358 { 00:28:06.358 "dma_device_id": "system", 00:28:06.358 "dma_device_type": 1 00:28:06.358 }, 00:28:06.358 { 00:28:06.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.358 "dma_device_type": 2 00:28:06.358 } 00:28:06.358 ], 00:28:06.358 "driver_specific": { 00:28:06.358 "raid": { 00:28:06.358 "uuid": "2a5cd9a5-bcb2-4596-ad4b-51bf0ba6ed3b", 00:28:06.358 "strip_size_kb": 0, 00:28:06.358 "state": "online", 00:28:06.358 "raid_level": "raid1", 00:28:06.358 "superblock": true, 00:28:06.358 "num_base_bdevs": 2, 00:28:06.358 "num_base_bdevs_discovered": 2, 00:28:06.358 "num_base_bdevs_operational": 2, 00:28:06.358 "base_bdevs_list": [ 00:28:06.358 { 00:28:06.358 "name": "BaseBdev1", 00:28:06.358 "uuid": "883c6297-9381-442b-9f0e-728a75e24a7c", 00:28:06.358 "is_configured": true, 00:28:06.358 "data_offset": 256, 00:28:06.358 "data_size": 7936 00:28:06.358 }, 00:28:06.358 { 00:28:06.358 "name": "BaseBdev2", 00:28:06.358 "uuid": "c7d9709e-d8e4-4f63-984a-d8be65630d4c", 00:28:06.358 "is_configured": true, 00:28:06.358 "data_offset": 256, 00:28:06.358 "data_size": 7936 00:28:06.358 } 00:28:06.358 ] 00:28:06.358 } 00:28:06.358 } 00:28:06.358 }' 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:06.358 BaseBdev2' 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:28:06.358 
17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.358 17:15:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.359 [2024-11-08 17:15:42.984266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:06.359 17:15:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:06.359 "name": "Existed_Raid", 00:28:06.359 "uuid": "2a5cd9a5-bcb2-4596-ad4b-51bf0ba6ed3b", 00:28:06.359 "strip_size_kb": 0, 00:28:06.359 "state": "online", 00:28:06.359 "raid_level": "raid1", 00:28:06.359 "superblock": true, 00:28:06.359 "num_base_bdevs": 2, 00:28:06.359 "num_base_bdevs_discovered": 1, 00:28:06.359 "num_base_bdevs_operational": 1, 00:28:06.359 "base_bdevs_list": [ 00:28:06.359 { 00:28:06.359 "name": null, 00:28:06.359 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:06.359 "is_configured": false, 00:28:06.359 "data_offset": 0, 00:28:06.359 "data_size": 7936 00:28:06.359 }, 00:28:06.359 { 00:28:06.359 "name": "BaseBdev2", 00:28:06.359 "uuid": "c7d9709e-d8e4-4f63-984a-d8be65630d4c", 00:28:06.359 "is_configured": true, 00:28:06.359 "data_offset": 256, 00:28:06.359 "data_size": 7936 00:28:06.359 } 00:28:06.359 ] 00:28:06.359 }' 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:06.359 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:06.925 17:15:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.925 [2024-11-08 17:15:43.381935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:06.925 [2024-11-08 17:15:43.382038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:06.925 [2024-11-08 17:15:43.430994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.925 [2024-11-08 17:15:43.431047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.925 [2024-11-08 17:15:43.431057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:06.925 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 86599 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 86599 ']' 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 86599 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86599 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:06.926 killing process with pid 86599 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86599' 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 86599 00:28:06.926 [2024-11-08 17:15:43.490601] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:06.926 17:15:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 86599 00:28:06.926 [2024-11-08 17:15:43.499549] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:07.492 
17:15:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:28:07.492 00:28:07.492 real 0m3.523s 00:28:07.492 user 0m5.067s 00:28:07.492 sys 0m0.636s 00:28:07.492 17:15:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:07.492 17:15:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.492 ************************************ 00:28:07.492 END TEST raid_state_function_test_sb_md_interleaved 00:28:07.492 ************************************ 00:28:07.492 17:15:44 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:28:07.492 17:15:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:07.492 17:15:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:07.492 17:15:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:07.492 ************************************ 00:28:07.492 START TEST raid_superblock_test_md_interleaved 00:28:07.492 ************************************ 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=86836 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 86836 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 86836 ']' 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:07.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:07.492 17:15:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.492 [2024-11-08 17:15:44.198586] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:28:07.492 [2024-11-08 17:15:44.198773] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86836 ] 00:28:07.750 [2024-11-08 17:15:44.350283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.750 [2024-11-08 17:15:44.449705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.050 [2024-11-08 17:15:44.571245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:08.050 [2024-11-08 17:15:44.571293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:08.617 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.618 malloc1 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.618 [2024-11-08 17:15:45.088323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:08.618 [2024-11-08 17:15:45.088381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.618 [2024-11-08 17:15:45.088404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:08.618 [2024-11-08 17:15:45.088414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.618 
[2024-11-08 17:15:45.090136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.618 [2024-11-08 17:15:45.090162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:08.618 pt1 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.618 malloc2 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.618 [2024-11-08 17:15:45.121995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:08.618 [2024-11-08 17:15:45.122037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.618 [2024-11-08 17:15:45.122055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:08.618 [2024-11-08 17:15:45.122063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.618 [2024-11-08 17:15:45.123701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.618 [2024-11-08 17:15:45.123728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:08.618 pt2 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.618 [2024-11-08 17:15:45.130030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:08.618 [2024-11-08 17:15:45.131692] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:08.618 [2024-11-08 17:15:45.131862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:08.618 [2024-11-08 17:15:45.131878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:08.618 [2024-11-08 17:15:45.131944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:08.618 [2024-11-08 17:15:45.132003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:08.618 [2024-11-08 17:15:45.132013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:08.618 [2024-11-08 17:15:45.132069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:08.618 
17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.618 "name": "raid_bdev1", 00:28:08.618 "uuid": "4ec27895-3164-4e9e-ae66-4a0a30185031", 00:28:08.618 "strip_size_kb": 0, 00:28:08.618 "state": "online", 00:28:08.618 "raid_level": "raid1", 00:28:08.618 "superblock": true, 00:28:08.618 "num_base_bdevs": 2, 00:28:08.618 "num_base_bdevs_discovered": 2, 00:28:08.618 "num_base_bdevs_operational": 2, 00:28:08.618 "base_bdevs_list": [ 00:28:08.618 { 00:28:08.618 "name": "pt1", 00:28:08.618 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:08.618 "is_configured": true, 00:28:08.618 "data_offset": 256, 00:28:08.618 "data_size": 7936 00:28:08.618 }, 00:28:08.618 { 00:28:08.618 "name": "pt2", 00:28:08.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:08.618 "is_configured": true, 00:28:08.618 "data_offset": 256, 00:28:08.618 "data_size": 7936 00:28:08.618 } 00:28:08.618 ] 00:28:08.618 }' 00:28:08.618 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.618 17:15:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.877 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:08.878 [2024-11-08 17:15:45.446379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:08.878 "name": "raid_bdev1", 00:28:08.878 "aliases": [ 00:28:08.878 "4ec27895-3164-4e9e-ae66-4a0a30185031" 00:28:08.878 ], 00:28:08.878 "product_name": "Raid Volume", 00:28:08.878 "block_size": 4128, 00:28:08.878 "num_blocks": 7936, 00:28:08.878 "uuid": "4ec27895-3164-4e9e-ae66-4a0a30185031", 00:28:08.878 "md_size": 32, 
00:28:08.878 "md_interleave": true, 00:28:08.878 "dif_type": 0, 00:28:08.878 "assigned_rate_limits": { 00:28:08.878 "rw_ios_per_sec": 0, 00:28:08.878 "rw_mbytes_per_sec": 0, 00:28:08.878 "r_mbytes_per_sec": 0, 00:28:08.878 "w_mbytes_per_sec": 0 00:28:08.878 }, 00:28:08.878 "claimed": false, 00:28:08.878 "zoned": false, 00:28:08.878 "supported_io_types": { 00:28:08.878 "read": true, 00:28:08.878 "write": true, 00:28:08.878 "unmap": false, 00:28:08.878 "flush": false, 00:28:08.878 "reset": true, 00:28:08.878 "nvme_admin": false, 00:28:08.878 "nvme_io": false, 00:28:08.878 "nvme_io_md": false, 00:28:08.878 "write_zeroes": true, 00:28:08.878 "zcopy": false, 00:28:08.878 "get_zone_info": false, 00:28:08.878 "zone_management": false, 00:28:08.878 "zone_append": false, 00:28:08.878 "compare": false, 00:28:08.878 "compare_and_write": false, 00:28:08.878 "abort": false, 00:28:08.878 "seek_hole": false, 00:28:08.878 "seek_data": false, 00:28:08.878 "copy": false, 00:28:08.878 "nvme_iov_md": false 00:28:08.878 }, 00:28:08.878 "memory_domains": [ 00:28:08.878 { 00:28:08.878 "dma_device_id": "system", 00:28:08.878 "dma_device_type": 1 00:28:08.878 }, 00:28:08.878 { 00:28:08.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:08.878 "dma_device_type": 2 00:28:08.878 }, 00:28:08.878 { 00:28:08.878 "dma_device_id": "system", 00:28:08.878 "dma_device_type": 1 00:28:08.878 }, 00:28:08.878 { 00:28:08.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:08.878 "dma_device_type": 2 00:28:08.878 } 00:28:08.878 ], 00:28:08.878 "driver_specific": { 00:28:08.878 "raid": { 00:28:08.878 "uuid": "4ec27895-3164-4e9e-ae66-4a0a30185031", 00:28:08.878 "strip_size_kb": 0, 00:28:08.878 "state": "online", 00:28:08.878 "raid_level": "raid1", 00:28:08.878 "superblock": true, 00:28:08.878 "num_base_bdevs": 2, 00:28:08.878 "num_base_bdevs_discovered": 2, 00:28:08.878 "num_base_bdevs_operational": 2, 00:28:08.878 "base_bdevs_list": [ 00:28:08.878 { 00:28:08.878 "name": "pt1", 00:28:08.878 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:28:08.878 "is_configured": true, 00:28:08.878 "data_offset": 256, 00:28:08.878 "data_size": 7936 00:28:08.878 }, 00:28:08.878 { 00:28:08.878 "name": "pt2", 00:28:08.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:08.878 "is_configured": true, 00:28:08.878 "data_offset": 256, 00:28:08.878 "data_size": 7936 00:28:08.878 } 00:28:08.878 ] 00:28:08.878 } 00:28:08.878 } 00:28:08.878 }' 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:08.878 pt2' 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:28:08.878 17:15:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.878 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:28:09.138 [2024-11-08 17:15:45.602332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4ec27895-3164-4e9e-ae66-4a0a30185031 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 4ec27895-3164-4e9e-ae66-4a0a30185031 ']' 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.138 [2024-11-08 17:15:45.634065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:09.138 [2024-11-08 17:15:45.634089] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:09.138 [2024-11-08 17:15:45.634170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:09.138 [2024-11-08 17:15:45.634227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:09.138 [2024-11-08 17:15:45.634238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.138 17:15:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.138 17:15:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.138 [2024-11-08 17:15:45.730117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:09.138 [2024-11-08 17:15:45.731801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:09.138 [2024-11-08 17:15:45.731871] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:28:09.138 [2024-11-08 17:15:45.731919] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:09.138 [2024-11-08 17:15:45.731931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:09.138 [2024-11-08 17:15:45.731940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:28:09.138 request: 00:28:09.138 { 00:28:09.138 "name": "raid_bdev1", 00:28:09.138 "raid_level": "raid1", 00:28:09.138 "base_bdevs": [ 00:28:09.138 "malloc1", 00:28:09.138 "malloc2" 00:28:09.138 ], 00:28:09.138 "superblock": false, 00:28:09.138 "method": "bdev_raid_create", 00:28:09.138 "req_id": 1 00:28:09.138 } 00:28:09.138 Got JSON-RPC error response 00:28:09.138 response: 00:28:09.138 { 00:28:09.138 "code": -17, 00:28:09.138 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:09.138 } 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:28:09.138 17:15:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.138 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.138 [2024-11-08 17:15:45.774100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:09.138 [2024-11-08 17:15:45.774151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:09.138 [2024-11-08 17:15:45.774165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:09.139 [2024-11-08 17:15:45.774174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:09.139 [2024-11-08 17:15:45.775902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:09.139 [2024-11-08 17:15:45.775934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:09.139 [2024-11-08 17:15:45.775981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:09.139 [2024-11-08 17:15:45.776032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:09.139 pt1 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.139 17:15:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:09.139 
"name": "raid_bdev1", 00:28:09.139 "uuid": "4ec27895-3164-4e9e-ae66-4a0a30185031", 00:28:09.139 "strip_size_kb": 0, 00:28:09.139 "state": "configuring", 00:28:09.139 "raid_level": "raid1", 00:28:09.139 "superblock": true, 00:28:09.139 "num_base_bdevs": 2, 00:28:09.139 "num_base_bdevs_discovered": 1, 00:28:09.139 "num_base_bdevs_operational": 2, 00:28:09.139 "base_bdevs_list": [ 00:28:09.139 { 00:28:09.139 "name": "pt1", 00:28:09.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:09.139 "is_configured": true, 00:28:09.139 "data_offset": 256, 00:28:09.139 "data_size": 7936 00:28:09.139 }, 00:28:09.139 { 00:28:09.139 "name": null, 00:28:09.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:09.139 "is_configured": false, 00:28:09.139 "data_offset": 256, 00:28:09.139 "data_size": 7936 00:28:09.139 } 00:28:09.139 ] 00:28:09.139 }' 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:09.139 17:15:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.398 [2024-11-08 17:15:46.102197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:09.398 [2024-11-08 17:15:46.102265] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:09.398 [2024-11-08 17:15:46.102284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:09.398 [2024-11-08 17:15:46.102294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:09.398 [2024-11-08 17:15:46.102460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:09.398 [2024-11-08 17:15:46.102476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:09.398 [2024-11-08 17:15:46.102524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:09.398 [2024-11-08 17:15:46.102545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:09.398 [2024-11-08 17:15:46.102625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:09.398 [2024-11-08 17:15:46.102639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:09.398 [2024-11-08 17:15:46.102716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:09.398 [2024-11-08 17:15:46.102792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:09.398 [2024-11-08 17:15:46.102799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:28:09.398 [2024-11-08 17:15:46.102858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.398 pt2 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:09.398 17:15:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.398 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.657 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.657 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.657 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:09.657 "name": 
"raid_bdev1", 00:28:09.657 "uuid": "4ec27895-3164-4e9e-ae66-4a0a30185031", 00:28:09.658 "strip_size_kb": 0, 00:28:09.658 "state": "online", 00:28:09.658 "raid_level": "raid1", 00:28:09.658 "superblock": true, 00:28:09.658 "num_base_bdevs": 2, 00:28:09.658 "num_base_bdevs_discovered": 2, 00:28:09.658 "num_base_bdevs_operational": 2, 00:28:09.658 "base_bdevs_list": [ 00:28:09.658 { 00:28:09.658 "name": "pt1", 00:28:09.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:09.658 "is_configured": true, 00:28:09.658 "data_offset": 256, 00:28:09.658 "data_size": 7936 00:28:09.658 }, 00:28:09.658 { 00:28:09.658 "name": "pt2", 00:28:09.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:09.658 "is_configured": true, 00:28:09.658 "data_offset": 256, 00:28:09.658 "data_size": 7936 00:28:09.658 } 00:28:09.658 ] 00:28:09.658 }' 00:28:09.658 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:09.658 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:09.917 17:15:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:09.917 [2024-11-08 17:15:46.426525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:09.917 "name": "raid_bdev1", 00:28:09.917 "aliases": [ 00:28:09.917 "4ec27895-3164-4e9e-ae66-4a0a30185031" 00:28:09.917 ], 00:28:09.917 "product_name": "Raid Volume", 00:28:09.917 "block_size": 4128, 00:28:09.917 "num_blocks": 7936, 00:28:09.917 "uuid": "4ec27895-3164-4e9e-ae66-4a0a30185031", 00:28:09.917 "md_size": 32, 00:28:09.917 "md_interleave": true, 00:28:09.917 "dif_type": 0, 00:28:09.917 "assigned_rate_limits": { 00:28:09.917 "rw_ios_per_sec": 0, 00:28:09.917 "rw_mbytes_per_sec": 0, 00:28:09.917 "r_mbytes_per_sec": 0, 00:28:09.917 "w_mbytes_per_sec": 0 00:28:09.917 }, 00:28:09.917 "claimed": false, 00:28:09.917 "zoned": false, 00:28:09.917 "supported_io_types": { 00:28:09.917 "read": true, 00:28:09.917 "write": true, 00:28:09.917 "unmap": false, 00:28:09.917 "flush": false, 00:28:09.917 "reset": true, 00:28:09.917 "nvme_admin": false, 00:28:09.917 "nvme_io": false, 00:28:09.917 "nvme_io_md": false, 00:28:09.917 "write_zeroes": true, 00:28:09.917 "zcopy": false, 00:28:09.917 "get_zone_info": false, 00:28:09.917 "zone_management": false, 00:28:09.917 "zone_append": false, 00:28:09.917 "compare": false, 00:28:09.917 "compare_and_write": false, 00:28:09.917 "abort": false, 00:28:09.917 "seek_hole": false, 00:28:09.917 "seek_data": false, 00:28:09.917 "copy": false, 00:28:09.917 "nvme_iov_md": 
false 00:28:09.917 }, 00:28:09.917 "memory_domains": [ 00:28:09.917 { 00:28:09.917 "dma_device_id": "system", 00:28:09.917 "dma_device_type": 1 00:28:09.917 }, 00:28:09.917 { 00:28:09.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.917 "dma_device_type": 2 00:28:09.917 }, 00:28:09.917 { 00:28:09.917 "dma_device_id": "system", 00:28:09.917 "dma_device_type": 1 00:28:09.917 }, 00:28:09.917 { 00:28:09.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.917 "dma_device_type": 2 00:28:09.917 } 00:28:09.917 ], 00:28:09.917 "driver_specific": { 00:28:09.917 "raid": { 00:28:09.917 "uuid": "4ec27895-3164-4e9e-ae66-4a0a30185031", 00:28:09.917 "strip_size_kb": 0, 00:28:09.917 "state": "online", 00:28:09.917 "raid_level": "raid1", 00:28:09.917 "superblock": true, 00:28:09.917 "num_base_bdevs": 2, 00:28:09.917 "num_base_bdevs_discovered": 2, 00:28:09.917 "num_base_bdevs_operational": 2, 00:28:09.917 "base_bdevs_list": [ 00:28:09.917 { 00:28:09.917 "name": "pt1", 00:28:09.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:09.917 "is_configured": true, 00:28:09.917 "data_offset": 256, 00:28:09.917 "data_size": 7936 00:28:09.917 }, 00:28:09.917 { 00:28:09.917 "name": "pt2", 00:28:09.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:09.917 "is_configured": true, 00:28:09.917 "data_offset": 256, 00:28:09.917 "data_size": 7936 00:28:09.917 } 00:28:09.917 ] 00:28:09.917 } 00:28:09.917 } 00:28:09.917 }' 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:09.917 pt2' 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:28:09.917 [2024-11-08 17:15:46.582538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 4ec27895-3164-4e9e-ae66-4a0a30185031 '!=' 4ec27895-3164-4e9e-ae66-4a0a30185031 ']' 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.917 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.917 [2024-11-08 17:15:46.614333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.918 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.177 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.177 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:28:10.177 "name": "raid_bdev1", 00:28:10.177 "uuid": "4ec27895-3164-4e9e-ae66-4a0a30185031", 00:28:10.177 "strip_size_kb": 0, 00:28:10.177 "state": "online", 00:28:10.177 "raid_level": "raid1", 00:28:10.177 "superblock": true, 00:28:10.177 "num_base_bdevs": 2, 00:28:10.177 "num_base_bdevs_discovered": 1, 00:28:10.177 "num_base_bdevs_operational": 1, 00:28:10.177 "base_bdevs_list": [ 00:28:10.177 { 00:28:10.177 "name": null, 00:28:10.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.177 "is_configured": false, 00:28:10.177 "data_offset": 0, 00:28:10.177 "data_size": 7936 00:28:10.177 }, 00:28:10.177 { 00:28:10.177 "name": "pt2", 00:28:10.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:10.177 "is_configured": true, 00:28:10.177 "data_offset": 256, 00:28:10.177 "data_size": 7936 00:28:10.177 } 00:28:10.177 ] 00:28:10.177 }' 00:28:10.177 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:10.177 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.436 [2024-11-08 17:15:46.922364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:10.436 [2024-11-08 17:15:46.922393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:10.436 [2024-11-08 17:15:46.922468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:10.436 [2024-11-08 17:15:46.922512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:28:10.436 [2024-11-08 17:15:46.922522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:10.436 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.437 [2024-11-08 17:15:46.970358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:10.437 [2024-11-08 17:15:46.970412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.437 [2024-11-08 17:15:46.970427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:10.437 [2024-11-08 17:15:46.970437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.437 [2024-11-08 17:15:46.972231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.437 [2024-11-08 17:15:46.972265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:10.437 [2024-11-08 17:15:46.972314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:10.437 [2024-11-08 17:15:46.972357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:10.437 [2024-11-08 17:15:46.972417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:10.437 [2024-11-08 17:15:46.972435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:28:10.437 [2024-11-08 17:15:46.972518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:10.437 [2024-11-08 17:15:46.972577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:10.437 [2024-11-08 17:15:46.972587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:28:10.437 [2024-11-08 17:15:46.972643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.437 pt2 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:10.437 17:15:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.437 17:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.437 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:10.437 "name": "raid_bdev1", 00:28:10.437 "uuid": "4ec27895-3164-4e9e-ae66-4a0a30185031", 00:28:10.437 "strip_size_kb": 0, 00:28:10.437 "state": "online", 00:28:10.437 "raid_level": "raid1", 00:28:10.437 "superblock": true, 00:28:10.437 "num_base_bdevs": 2, 00:28:10.437 "num_base_bdevs_discovered": 1, 00:28:10.437 "num_base_bdevs_operational": 1, 00:28:10.437 "base_bdevs_list": [ 00:28:10.437 { 00:28:10.437 "name": null, 00:28:10.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.437 "is_configured": false, 00:28:10.437 "data_offset": 256, 00:28:10.437 "data_size": 7936 00:28:10.437 }, 00:28:10.437 { 00:28:10.437 "name": "pt2", 00:28:10.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:10.437 "is_configured": true, 00:28:10.437 "data_offset": 256, 00:28:10.437 "data_size": 7936 00:28:10.437 } 00:28:10.437 ] 00:28:10.437 }' 00:28:10.437 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:10.437 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:10.696 17:15:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.696 [2024-11-08 17:15:47.294419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:10.696 [2024-11-08 17:15:47.294447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:10.696 [2024-11-08 17:15:47.294516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:10.696 [2024-11-08 17:15:47.294563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:10.696 [2024-11-08 17:15:47.294572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.696 [2024-11-08 17:15:47.330440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:10.696 [2024-11-08 17:15:47.330502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.696 [2024-11-08 17:15:47.330521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:28:10.696 [2024-11-08 17:15:47.330529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.696 [2024-11-08 17:15:47.332284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.696 [2024-11-08 17:15:47.332309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:10.696 [2024-11-08 17:15:47.332361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:10.696 [2024-11-08 17:15:47.332401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:10.696 [2024-11-08 17:15:47.332492] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:10.696 [2024-11-08 17:15:47.332506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:10.696 [2024-11-08 17:15:47.332524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:28:10.696 [2024-11-08 17:15:47.332567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:10.696 [2024-11-08 17:15:47.332628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:28:10.696 [2024-11-08 17:15:47.332640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:10.696 [2024-11-08 17:15:47.332697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:10.696 [2024-11-08 17:15:47.332762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:28:10.696 [2024-11-08 17:15:47.332773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:28:10.696 [2024-11-08 17:15:47.332837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.696 pt1 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:10.696 17:15:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.696 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:10.697 "name": "raid_bdev1", 00:28:10.697 "uuid": "4ec27895-3164-4e9e-ae66-4a0a30185031", 00:28:10.697 "strip_size_kb": 0, 00:28:10.697 "state": "online", 00:28:10.697 "raid_level": "raid1", 00:28:10.697 "superblock": true, 00:28:10.697 "num_base_bdevs": 2, 00:28:10.697 "num_base_bdevs_discovered": 1, 00:28:10.697 "num_base_bdevs_operational": 1, 00:28:10.697 "base_bdevs_list": [ 00:28:10.697 { 00:28:10.697 "name": null, 00:28:10.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.697 "is_configured": false, 00:28:10.697 "data_offset": 256, 00:28:10.697 "data_size": 7936 00:28:10.697 }, 00:28:10.697 { 00:28:10.697 "name": "pt2", 00:28:10.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:10.697 "is_configured": true, 00:28:10.697 "data_offset": 256, 00:28:10.697 "data_size": 7936 00:28:10.697 } 00:28:10.697 ] 00:28:10.697 }' 00:28:10.697 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:10.697 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:28:10.955 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:28:10.955 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.955 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:10.955 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.955 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:28:11.213 [2024-11-08 17:15:47.686743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 4ec27895-3164-4e9e-ae66-4a0a30185031 '!=' 4ec27895-3164-4e9e-ae66-4a0a30185031 ']' 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 86836 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 86836 ']' 00:28:11.213 17:15:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 86836 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86836 00:28:11.213 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:11.214 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:11.214 killing process with pid 86836 00:28:11.214 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86836' 00:28:11.214 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 86836 00:28:11.214 [2024-11-08 17:15:47.741403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:11.214 17:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 86836 00:28:11.214 [2024-11-08 17:15:47.741497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:11.214 [2024-11-08 17:15:47.741542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:11.214 [2024-11-08 17:15:47.741555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:28:11.214 [2024-11-08 17:15:47.847543] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:11.780 17:15:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:28:11.780 00:28:11.780 real 0m4.301s 00:28:11.780 user 0m6.578s 00:28:11.780 sys 0m0.731s 00:28:11.780 
17:15:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:11.780 17:15:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.780 ************************************ 00:28:11.780 END TEST raid_superblock_test_md_interleaved 00:28:11.780 ************************************ 00:28:11.780 17:15:48 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:28:11.780 17:15:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:28:11.780 17:15:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:11.780 17:15:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:11.780 ************************************ 00:28:11.780 START TEST raid_rebuild_test_sb_md_interleaved 00:28:11.780 ************************************ 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:11.780 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=87144 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 87144 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 87144 ']' 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:11.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.781 17:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:12.039 [2024-11-08 17:15:48.555469] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:28:12.039 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:12.039 Zero copy mechanism will not be used. 
00:28:12.039 [2024-11-08 17:15:48.555595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87144 ] 00:28:12.039 [2024-11-08 17:15:48.715480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.297 [2024-11-08 17:15:48.833006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.297 [2024-11-08 17:15:48.983571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:12.297 [2024-11-08 17:15:48.983620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.886 BaseBdev1_malloc 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.886 17:15:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.886 [2024-11-08 17:15:49.476600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:12.886 [2024-11-08 17:15:49.476670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.886 [2024-11-08 17:15:49.476691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:12.886 [2024-11-08 17:15:49.476703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.886 [2024-11-08 17:15:49.478716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.886 [2024-11-08 17:15:49.478765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:12.886 BaseBdev1 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.886 BaseBdev2_malloc 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:28:12.886 [2024-11-08 17:15:49.514352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:12.886 [2024-11-08 17:15:49.514413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.886 [2024-11-08 17:15:49.514430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:12.886 [2024-11-08 17:15:49.514443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.886 [2024-11-08 17:15:49.516407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.886 [2024-11-08 17:15:49.516445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:12.886 BaseBdev2 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.886 spare_malloc 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.886 spare_delay 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.886 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.886 [2024-11-08 17:15:49.573846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:12.886 [2024-11-08 17:15:49.573904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.886 [2024-11-08 17:15:49.573925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:12.886 [2024-11-08 17:15:49.573937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.886 [2024-11-08 17:15:49.575938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.887 [2024-11-08 17:15:49.575973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:12.887 spare 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.887 [2024-11-08 17:15:49.581888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:12.887 [2024-11-08 17:15:49.583842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:12.887 [2024-11-08 
17:15:49.584029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:12.887 [2024-11-08 17:15:49.584049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:12.887 [2024-11-08 17:15:49.584132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:12.887 [2024-11-08 17:15:49.584204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:12.887 [2024-11-08 17:15:49.584217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:12.887 [2024-11-08 17:15:49.584285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.887 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.146 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.146 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.146 "name": "raid_bdev1", 00:28:13.146 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:13.146 "strip_size_kb": 0, 00:28:13.146 "state": "online", 00:28:13.146 "raid_level": "raid1", 00:28:13.146 "superblock": true, 00:28:13.146 "num_base_bdevs": 2, 00:28:13.146 "num_base_bdevs_discovered": 2, 00:28:13.146 "num_base_bdevs_operational": 2, 00:28:13.146 "base_bdevs_list": [ 00:28:13.146 { 00:28:13.146 "name": "BaseBdev1", 00:28:13.146 "uuid": "375294b8-99d7-5bf7-9423-c7d712b08a8f", 00:28:13.146 "is_configured": true, 00:28:13.146 "data_offset": 256, 00:28:13.146 "data_size": 7936 00:28:13.146 }, 00:28:13.146 { 00:28:13.146 "name": "BaseBdev2", 00:28:13.146 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:13.146 "is_configured": true, 00:28:13.146 "data_offset": 256, 00:28:13.146 "data_size": 7936 00:28:13.146 } 00:28:13.146 ] 00:28:13.146 }' 00:28:13.146 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.146 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.403 17:15:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:13.403 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 [2024-11-08 17:15:49.910281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:13.404 17:15:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 [2024-11-08 17:15:49.969941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.404 17:15:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 17:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.404 17:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.404 "name": "raid_bdev1", 00:28:13.404 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:13.404 "strip_size_kb": 0, 00:28:13.404 "state": "online", 00:28:13.404 "raid_level": "raid1", 00:28:13.404 "superblock": true, 00:28:13.404 "num_base_bdevs": 2, 00:28:13.404 "num_base_bdevs_discovered": 1, 00:28:13.404 "num_base_bdevs_operational": 1, 00:28:13.404 "base_bdevs_list": [ 00:28:13.404 { 00:28:13.404 "name": null, 00:28:13.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.404 "is_configured": false, 00:28:13.404 "data_offset": 0, 00:28:13.404 "data_size": 7936 00:28:13.404 }, 00:28:13.404 { 00:28:13.404 "name": "BaseBdev2", 00:28:13.404 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:13.404 "is_configured": true, 00:28:13.404 "data_offset": 256, 00:28:13.404 "data_size": 7936 00:28:13.404 } 00:28:13.404 ] 00:28:13.404 }' 00:28:13.404 17:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.404 17:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.662 17:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:13.662 17:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.662 17:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.662 [2024-11-08 17:15:50.302068] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:13.662 [2024-11-08 17:15:50.314238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:13.662 17:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.662 17:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:13.662 [2024-11-08 17:15:50.316224] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:15.037 "name": "raid_bdev1", 00:28:15.037 
"uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:15.037 "strip_size_kb": 0, 00:28:15.037 "state": "online", 00:28:15.037 "raid_level": "raid1", 00:28:15.037 "superblock": true, 00:28:15.037 "num_base_bdevs": 2, 00:28:15.037 "num_base_bdevs_discovered": 2, 00:28:15.037 "num_base_bdevs_operational": 2, 00:28:15.037 "process": { 00:28:15.037 "type": "rebuild", 00:28:15.037 "target": "spare", 00:28:15.037 "progress": { 00:28:15.037 "blocks": 2560, 00:28:15.037 "percent": 32 00:28:15.037 } 00:28:15.037 }, 00:28:15.037 "base_bdevs_list": [ 00:28:15.037 { 00:28:15.037 "name": "spare", 00:28:15.037 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:15.037 "is_configured": true, 00:28:15.037 "data_offset": 256, 00:28:15.037 "data_size": 7936 00:28:15.037 }, 00:28:15.037 { 00:28:15.037 "name": "BaseBdev2", 00:28:15.037 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:15.037 "is_configured": true, 00:28:15.037 "data_offset": 256, 00:28:15.037 "data_size": 7936 00:28:15.037 } 00:28:15.037 ] 00:28:15.037 }' 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.037 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.037 [2024-11-08 17:15:51.426487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:28:15.037 [2024-11-08 17:15:51.523592] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:15.037 [2024-11-08 17:15:51.523678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:15.037 [2024-11-08 17:15:51.523695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:15.037 [2024-11-08 17:15:51.523709] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.038 "name": "raid_bdev1", 00:28:15.038 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:15.038 "strip_size_kb": 0, 00:28:15.038 "state": "online", 00:28:15.038 "raid_level": "raid1", 00:28:15.038 "superblock": true, 00:28:15.038 "num_base_bdevs": 2, 00:28:15.038 "num_base_bdevs_discovered": 1, 00:28:15.038 "num_base_bdevs_operational": 1, 00:28:15.038 "base_bdevs_list": [ 00:28:15.038 { 00:28:15.038 "name": null, 00:28:15.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.038 "is_configured": false, 00:28:15.038 "data_offset": 0, 00:28:15.038 "data_size": 7936 00:28:15.038 }, 00:28:15.038 { 00:28:15.038 "name": "BaseBdev2", 00:28:15.038 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:15.038 "is_configured": true, 00:28:15.038 "data_offset": 256, 00:28:15.038 "data_size": 7936 00:28:15.038 } 00:28:15.038 ] 00:28:15.038 }' 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.038 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:15.296 "name": "raid_bdev1", 00:28:15.296 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:15.296 "strip_size_kb": 0, 00:28:15.296 "state": "online", 00:28:15.296 "raid_level": "raid1", 00:28:15.296 "superblock": true, 00:28:15.296 "num_base_bdevs": 2, 00:28:15.296 "num_base_bdevs_discovered": 1, 00:28:15.296 "num_base_bdevs_operational": 1, 00:28:15.296 "base_bdevs_list": [ 00:28:15.296 { 00:28:15.296 "name": null, 00:28:15.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.296 "is_configured": false, 00:28:15.296 "data_offset": 0, 00:28:15.296 "data_size": 7936 00:28:15.296 }, 00:28:15.296 { 00:28:15.296 "name": "BaseBdev2", 00:28:15.296 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:15.296 "is_configured": true, 00:28:15.296 "data_offset": 256, 00:28:15.296 "data_size": 7936 00:28:15.296 } 00:28:15.296 ] 00:28:15.296 }' 
00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.296 [2024-11-08 17:15:51.944283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:15.296 [2024-11-08 17:15:51.956059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.296 17:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:15.296 [2024-11-08 17:15:51.958061] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:16.670 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:16.670 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:16.670 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:16.670 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:28:16.670 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:16.671 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.671 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.671 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.671 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.671 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.671 17:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:16.671 "name": "raid_bdev1", 00:28:16.671 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:16.671 "strip_size_kb": 0, 00:28:16.671 "state": "online", 00:28:16.671 "raid_level": "raid1", 00:28:16.671 "superblock": true, 00:28:16.671 "num_base_bdevs": 2, 00:28:16.671 "num_base_bdevs_discovered": 2, 00:28:16.671 "num_base_bdevs_operational": 2, 00:28:16.671 "process": { 00:28:16.671 "type": "rebuild", 00:28:16.671 "target": "spare", 00:28:16.671 "progress": { 00:28:16.671 "blocks": 2560, 00:28:16.671 "percent": 32 00:28:16.671 } 00:28:16.671 }, 00:28:16.671 "base_bdevs_list": [ 00:28:16.671 { 00:28:16.671 "name": "spare", 00:28:16.671 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:16.671 "is_configured": true, 00:28:16.671 "data_offset": 256, 00:28:16.671 "data_size": 7936 00:28:16.671 }, 00:28:16.671 { 00:28:16.671 "name": "BaseBdev2", 00:28:16.671 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:16.671 "is_configured": true, 00:28:16.671 "data_offset": 256, 00:28:16.671 "data_size": 7936 00:28:16.671 } 00:28:16.671 ] 00:28:16.671 }' 00:28:16.671 17:15:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:16.671 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=641 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:16.671 17:15:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:16.671 "name": "raid_bdev1", 00:28:16.671 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:16.671 "strip_size_kb": 0, 00:28:16.671 "state": "online", 00:28:16.671 "raid_level": "raid1", 00:28:16.671 "superblock": true, 00:28:16.671 "num_base_bdevs": 2, 00:28:16.671 "num_base_bdevs_discovered": 2, 00:28:16.671 "num_base_bdevs_operational": 2, 00:28:16.671 "process": { 00:28:16.671 "type": "rebuild", 00:28:16.671 "target": "spare", 00:28:16.671 "progress": { 00:28:16.671 "blocks": 2560, 00:28:16.671 "percent": 32 00:28:16.671 } 00:28:16.671 }, 00:28:16.671 "base_bdevs_list": [ 00:28:16.671 { 00:28:16.671 "name": "spare", 00:28:16.671 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:16.671 "is_configured": true, 00:28:16.671 "data_offset": 256, 00:28:16.671 "data_size": 7936 00:28:16.671 }, 00:28:16.671 { 00:28:16.671 "name": "BaseBdev2", 00:28:16.671 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:16.671 "is_configured": true, 00:28:16.671 "data_offset": 256, 00:28:16.671 "data_size": 7936 00:28:16.671 } 00:28:16.671 ] 00:28:16.671 }' 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:16.671 17:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:17.606 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.606 17:15:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:17.606 "name": "raid_bdev1", 00:28:17.606 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:17.606 "strip_size_kb": 0, 00:28:17.606 "state": "online", 00:28:17.606 "raid_level": "raid1", 00:28:17.606 "superblock": true, 00:28:17.606 "num_base_bdevs": 2, 00:28:17.606 "num_base_bdevs_discovered": 2, 00:28:17.606 "num_base_bdevs_operational": 2, 00:28:17.606 "process": { 00:28:17.606 "type": "rebuild", 00:28:17.606 "target": "spare", 00:28:17.606 "progress": { 00:28:17.606 "blocks": 5376, 00:28:17.606 "percent": 67 00:28:17.606 } 00:28:17.606 }, 00:28:17.606 "base_bdevs_list": [ 00:28:17.606 { 00:28:17.606 "name": "spare", 00:28:17.606 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:17.606 "is_configured": true, 00:28:17.606 "data_offset": 256, 00:28:17.606 "data_size": 7936 00:28:17.606 }, 00:28:17.606 { 00:28:17.606 "name": "BaseBdev2", 00:28:17.606 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:17.606 "is_configured": true, 00:28:17.606 "data_offset": 256, 00:28:17.607 "data_size": 7936 00:28:17.607 } 00:28:17.607 ] 00:28:17.607 }' 00:28:17.607 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:17.607 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:17.607 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:17.607 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:17.607 17:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:18.576 [2024-11-08 17:15:55.076700] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:18.576 [2024-11-08 17:15:55.076805] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:18.576 [2024-11-08 17:15:55.076930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.576 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.834 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:18.834 "name": "raid_bdev1", 00:28:18.834 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:18.834 "strip_size_kb": 0, 00:28:18.834 "state": "online", 00:28:18.834 "raid_level": "raid1", 00:28:18.834 "superblock": true, 00:28:18.834 "num_base_bdevs": 2, 00:28:18.834 
"num_base_bdevs_discovered": 2, 00:28:18.834 "num_base_bdevs_operational": 2, 00:28:18.834 "base_bdevs_list": [ 00:28:18.834 { 00:28:18.834 "name": "spare", 00:28:18.834 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:18.834 "is_configured": true, 00:28:18.834 "data_offset": 256, 00:28:18.834 "data_size": 7936 00:28:18.835 }, 00:28:18.835 { 00:28:18.835 "name": "BaseBdev2", 00:28:18.835 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:18.835 "is_configured": true, 00:28:18.835 "data_offset": 256, 00:28:18.835 "data_size": 7936 00:28:18.835 } 00:28:18.835 ] 00:28:18.835 }' 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.835 17:15:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:18.835 "name": "raid_bdev1", 00:28:18.835 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:18.835 "strip_size_kb": 0, 00:28:18.835 "state": "online", 00:28:18.835 "raid_level": "raid1", 00:28:18.835 "superblock": true, 00:28:18.835 "num_base_bdevs": 2, 00:28:18.835 "num_base_bdevs_discovered": 2, 00:28:18.835 "num_base_bdevs_operational": 2, 00:28:18.835 "base_bdevs_list": [ 00:28:18.835 { 00:28:18.835 "name": "spare", 00:28:18.835 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:18.835 "is_configured": true, 00:28:18.835 "data_offset": 256, 00:28:18.835 "data_size": 7936 00:28:18.835 }, 00:28:18.835 { 00:28:18.835 "name": "BaseBdev2", 00:28:18.835 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:18.835 "is_configured": true, 00:28:18.835 "data_offset": 256, 00:28:18.835 "data_size": 7936 00:28:18.835 } 00:28:18.835 ] 00:28:18.835 }' 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:18.835 17:15:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:18.835 "name": 
"raid_bdev1", 00:28:18.835 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:18.835 "strip_size_kb": 0, 00:28:18.835 "state": "online", 00:28:18.835 "raid_level": "raid1", 00:28:18.835 "superblock": true, 00:28:18.835 "num_base_bdevs": 2, 00:28:18.835 "num_base_bdevs_discovered": 2, 00:28:18.835 "num_base_bdevs_operational": 2, 00:28:18.835 "base_bdevs_list": [ 00:28:18.835 { 00:28:18.835 "name": "spare", 00:28:18.835 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:18.835 "is_configured": true, 00:28:18.835 "data_offset": 256, 00:28:18.835 "data_size": 7936 00:28:18.835 }, 00:28:18.835 { 00:28:18.835 "name": "BaseBdev2", 00:28:18.835 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:18.835 "is_configured": true, 00:28:18.835 "data_offset": 256, 00:28:18.835 "data_size": 7936 00:28:18.835 } 00:28:18.835 ] 00:28:18.835 }' 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:18.835 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.094 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:19.094 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.094 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.094 [2024-11-08 17:15:55.784883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:19.094 [2024-11-08 17:15:55.784923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:19.094 [2024-11-08 17:15:55.785009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:19.094 [2024-11-08 17:15:55.785087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:19.094 [2024-11-08 
17:15:55.785097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:19.094 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.094 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.094 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:28:19.094 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.094 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.094 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.353 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:19.353 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:19.353 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:28:19.353 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:28:19.353 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.353 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.353 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.353 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:19.353 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.353 17:15:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.353 [2024-11-08 17:15:55.836868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:19.353 [2024-11-08 17:15:55.836929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.353 [2024-11-08 17:15:55.836949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:19.353 [2024-11-08 17:15:55.836958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.353 [2024-11-08 17:15:55.838827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.353 [2024-11-08 17:15:55.838859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:19.353 [2024-11-08 17:15:55.838919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:19.354 [2024-11-08 17:15:55.838973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:19.354 [2024-11-08 17:15:55.839066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:19.354 spare 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.354 [2024-11-08 17:15:55.939166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:19.354 [2024-11-08 17:15:55.939235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:19.354 [2024-11-08 17:15:55.939373] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:19.354 [2024-11-08 17:15:55.939479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:19.354 [2024-11-08 17:15:55.939488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:19.354 [2024-11-08 17:15:55.939588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.354 
17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.354 "name": "raid_bdev1", 00:28:19.354 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:19.354 "strip_size_kb": 0, 00:28:19.354 "state": "online", 00:28:19.354 "raid_level": "raid1", 00:28:19.354 "superblock": true, 00:28:19.354 "num_base_bdevs": 2, 00:28:19.354 "num_base_bdevs_discovered": 2, 00:28:19.354 "num_base_bdevs_operational": 2, 00:28:19.354 "base_bdevs_list": [ 00:28:19.354 { 00:28:19.354 "name": "spare", 00:28:19.354 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:19.354 "is_configured": true, 00:28:19.354 "data_offset": 256, 00:28:19.354 "data_size": 7936 00:28:19.354 }, 00:28:19.354 { 00:28:19.354 "name": "BaseBdev2", 00:28:19.354 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:19.354 "is_configured": true, 00:28:19.354 "data_offset": 256, 00:28:19.354 "data_size": 7936 00:28:19.354 } 00:28:19.354 ] 00:28:19.354 }' 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.354 17:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.612 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:19.612 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:19.612 17:15:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:19.612 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:19.612 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:19.612 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.612 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.612 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.612 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.612 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:19.871 "name": "raid_bdev1", 00:28:19.871 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:19.871 "strip_size_kb": 0, 00:28:19.871 "state": "online", 00:28:19.871 "raid_level": "raid1", 00:28:19.871 "superblock": true, 00:28:19.871 "num_base_bdevs": 2, 00:28:19.871 "num_base_bdevs_discovered": 2, 00:28:19.871 "num_base_bdevs_operational": 2, 00:28:19.871 "base_bdevs_list": [ 00:28:19.871 { 00:28:19.871 "name": "spare", 00:28:19.871 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:19.871 "is_configured": true, 00:28:19.871 "data_offset": 256, 00:28:19.871 "data_size": 7936 00:28:19.871 }, 00:28:19.871 { 00:28:19.871 "name": "BaseBdev2", 00:28:19.871 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:19.871 "is_configured": true, 00:28:19.871 "data_offset": 256, 00:28:19.871 "data_size": 7936 00:28:19.871 } 00:28:19.871 ] 00:28:19.871 }' 00:28:19.871 17:15:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.871 [2024-11-08 17:15:56.441061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:19.871 17:15:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.871 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.872 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.872 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.872 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.872 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.872 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.872 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.872 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.872 "name": "raid_bdev1", 00:28:19.872 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:19.872 "strip_size_kb": 0, 00:28:19.872 "state": "online", 00:28:19.872 
"raid_level": "raid1", 00:28:19.872 "superblock": true, 00:28:19.872 "num_base_bdevs": 2, 00:28:19.872 "num_base_bdevs_discovered": 1, 00:28:19.872 "num_base_bdevs_operational": 1, 00:28:19.872 "base_bdevs_list": [ 00:28:19.872 { 00:28:19.872 "name": null, 00:28:19.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.872 "is_configured": false, 00:28:19.872 "data_offset": 0, 00:28:19.872 "data_size": 7936 00:28:19.872 }, 00:28:19.872 { 00:28:19.872 "name": "BaseBdev2", 00:28:19.872 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:19.872 "is_configured": true, 00:28:19.872 "data_offset": 256, 00:28:19.872 "data_size": 7936 00:28:19.872 } 00:28:19.872 ] 00:28:19.872 }' 00:28:19.872 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.872 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:20.130 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:20.130 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.130 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:20.130 [2024-11-08 17:15:56.769151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:20.130 [2024-11-08 17:15:56.769351] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:20.130 [2024-11-08 17:15:56.769365] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:20.130 [2024-11-08 17:15:56.769406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:20.130 [2024-11-08 17:15:56.779201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:28:20.130 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.130 17:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:28:20.130 [2024-11-08 17:15:56.780937] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:28:21.503 "name": "raid_bdev1", 00:28:21.503 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:21.503 "strip_size_kb": 0, 00:28:21.503 "state": "online", 00:28:21.503 "raid_level": "raid1", 00:28:21.503 "superblock": true, 00:28:21.503 "num_base_bdevs": 2, 00:28:21.503 "num_base_bdevs_discovered": 2, 00:28:21.503 "num_base_bdevs_operational": 2, 00:28:21.503 "process": { 00:28:21.503 "type": "rebuild", 00:28:21.503 "target": "spare", 00:28:21.503 "progress": { 00:28:21.503 "blocks": 2560, 00:28:21.503 "percent": 32 00:28:21.503 } 00:28:21.503 }, 00:28:21.503 "base_bdevs_list": [ 00:28:21.503 { 00:28:21.503 "name": "spare", 00:28:21.503 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:21.503 "is_configured": true, 00:28:21.503 "data_offset": 256, 00:28:21.503 "data_size": 7936 00:28:21.503 }, 00:28:21.503 { 00:28:21.503 "name": "BaseBdev2", 00:28:21.503 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:21.503 "is_configured": true, 00:28:21.503 "data_offset": 256, 00:28:21.503 "data_size": 7936 00:28:21.503 } 00:28:21.503 ] 00:28:21.503 }' 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.503 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.503 [2024-11-08 17:15:57.883292] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:21.503 [2024-11-08 17:15:57.887734] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:21.503 [2024-11-08 17:15:57.887912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.504 [2024-11-08 17:15:57.887974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:21.504 [2024-11-08 17:15:57.888001] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:21.504 17:15:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:21.504 "name": "raid_bdev1", 00:28:21.504 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:21.504 "strip_size_kb": 0, 00:28:21.504 "state": "online", 00:28:21.504 "raid_level": "raid1", 00:28:21.504 "superblock": true, 00:28:21.504 "num_base_bdevs": 2, 00:28:21.504 "num_base_bdevs_discovered": 1, 00:28:21.504 "num_base_bdevs_operational": 1, 00:28:21.504 "base_bdevs_list": [ 00:28:21.504 { 00:28:21.504 "name": null, 00:28:21.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.504 "is_configured": false, 00:28:21.504 "data_offset": 0, 00:28:21.504 "data_size": 7936 00:28:21.504 }, 00:28:21.504 { 00:28:21.504 "name": "BaseBdev2", 00:28:21.504 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:21.504 "is_configured": true, 00:28:21.504 "data_offset": 256, 00:28:21.504 "data_size": 7936 00:28:21.504 } 00:28:21.504 ] 00:28:21.504 }' 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:21.504 17:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.761 17:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:21.761 17:15:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.761 17:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.761 [2024-11-08 17:15:58.223805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:21.761 [2024-11-08 17:15:58.223997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.761 [2024-11-08 17:15:58.224022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:21.761 [2024-11-08 17:15:58.224032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.761 [2024-11-08 17:15:58.224241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.761 [2024-11-08 17:15:58.224255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:21.761 [2024-11-08 17:15:58.224311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:21.761 [2024-11-08 17:15:58.224323] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:21.761 [2024-11-08 17:15:58.224333] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:21.761 [2024-11-08 17:15:58.224357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:21.761 [2024-11-08 17:15:58.234009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:21.761 spare 00:28:21.761 17:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.761 17:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:28:21.761 [2024-11-08 17:15:58.235803] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:28:22.693 "name": "raid_bdev1", 00:28:22.693 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:22.693 "strip_size_kb": 0, 00:28:22.693 "state": "online", 00:28:22.693 "raid_level": "raid1", 00:28:22.693 "superblock": true, 00:28:22.693 "num_base_bdevs": 2, 00:28:22.693 "num_base_bdevs_discovered": 2, 00:28:22.693 "num_base_bdevs_operational": 2, 00:28:22.693 "process": { 00:28:22.693 "type": "rebuild", 00:28:22.693 "target": "spare", 00:28:22.693 "progress": { 00:28:22.693 "blocks": 2560, 00:28:22.693 "percent": 32 00:28:22.693 } 00:28:22.693 }, 00:28:22.693 "base_bdevs_list": [ 00:28:22.693 { 00:28:22.693 "name": "spare", 00:28:22.693 "uuid": "3f8da726-5dc0-53f5-ab7c-c2ab2d419a72", 00:28:22.693 "is_configured": true, 00:28:22.693 "data_offset": 256, 00:28:22.693 "data_size": 7936 00:28:22.693 }, 00:28:22.693 { 00:28:22.693 "name": "BaseBdev2", 00:28:22.693 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:22.693 "is_configured": true, 00:28:22.693 "data_offset": 256, 00:28:22.693 "data_size": 7936 00:28:22.693 } 00:28:22.693 ] 00:28:22.693 }' 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.693 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:22.693 [2024-11-08 
17:15:59.350003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.051 [2024-11-08 17:15:59.443099] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:23.051 [2024-11-08 17:15:59.443353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.051 [2024-11-08 17:15:59.443417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.051 [2024-11-08 17:15:59.443439] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:23.051 17:15:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.051 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:23.051 "name": "raid_bdev1", 00:28:23.051 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:23.051 "strip_size_kb": 0, 00:28:23.051 "state": "online", 00:28:23.051 "raid_level": "raid1", 00:28:23.051 "superblock": true, 00:28:23.051 "num_base_bdevs": 2, 00:28:23.051 "num_base_bdevs_discovered": 1, 00:28:23.051 "num_base_bdevs_operational": 1, 00:28:23.051 "base_bdevs_list": [ 00:28:23.051 { 00:28:23.051 "name": null, 00:28:23.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.051 "is_configured": false, 00:28:23.051 "data_offset": 0, 00:28:23.051 "data_size": 7936 00:28:23.051 }, 00:28:23.051 { 00:28:23.051 "name": "BaseBdev2", 00:28:23.051 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:23.051 "is_configured": true, 00:28:23.052 "data_offset": 256, 00:28:23.052 "data_size": 7936 00:28:23.052 } 00:28:23.052 ] 00:28:23.052 }' 00:28:23.052 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:23.052 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:23.320 17:15:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:23.320 "name": "raid_bdev1", 00:28:23.320 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:23.320 "strip_size_kb": 0, 00:28:23.320 "state": "online", 00:28:23.320 "raid_level": "raid1", 00:28:23.320 "superblock": true, 00:28:23.320 "num_base_bdevs": 2, 00:28:23.320 "num_base_bdevs_discovered": 1, 00:28:23.320 "num_base_bdevs_operational": 1, 00:28:23.320 "base_bdevs_list": [ 00:28:23.320 { 00:28:23.320 "name": null, 00:28:23.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.320 "is_configured": false, 00:28:23.320 "data_offset": 0, 00:28:23.320 "data_size": 7936 00:28:23.320 }, 00:28:23.320 { 00:28:23.320 "name": "BaseBdev2", 00:28:23.320 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:23.320 "is_configured": true, 00:28:23.320 "data_offset": 256, 
00:28:23.320 "data_size": 7936 00:28:23.320 } 00:28:23.320 ] 00:28:23.320 }' 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:23.320 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.321 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:23.321 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.321 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:23.321 [2024-11-08 17:15:59.935011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:23.321 [2024-11-08 17:15:59.935075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.321 [2024-11-08 17:15:59.935098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:23.321 [2024-11-08 17:15:59.935107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.321 [2024-11-08 17:15:59.935276] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.321 [2024-11-08 17:15:59.935286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:23.321 [2024-11-08 17:15:59.935337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:23.321 [2024-11-08 17:15:59.935350] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:23.321 [2024-11-08 17:15:59.935358] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:23.321 [2024-11-08 17:15:59.935368] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:28:23.321 BaseBdev1 00:28:23.321 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.321 17:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:24.253 17:16:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.253 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.511 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:24.511 "name": "raid_bdev1", 00:28:24.511 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:24.511 "strip_size_kb": 0, 00:28:24.511 "state": "online", 00:28:24.511 "raid_level": "raid1", 00:28:24.511 "superblock": true, 00:28:24.511 "num_base_bdevs": 2, 00:28:24.511 "num_base_bdevs_discovered": 1, 00:28:24.511 "num_base_bdevs_operational": 1, 00:28:24.511 "base_bdevs_list": [ 00:28:24.511 { 00:28:24.511 "name": null, 00:28:24.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.511 "is_configured": false, 00:28:24.511 "data_offset": 0, 00:28:24.511 "data_size": 7936 00:28:24.511 }, 00:28:24.511 { 00:28:24.511 "name": "BaseBdev2", 00:28:24.511 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:24.511 "is_configured": true, 00:28:24.511 "data_offset": 256, 00:28:24.511 "data_size": 7936 00:28:24.511 } 00:28:24.511 ] 00:28:24.511 }' 00:28:24.511 17:16:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:24.511 17:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:24.769 "name": "raid_bdev1", 00:28:24.769 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:24.769 "strip_size_kb": 0, 00:28:24.769 "state": "online", 00:28:24.769 "raid_level": "raid1", 00:28:24.769 "superblock": true, 00:28:24.769 "num_base_bdevs": 2, 00:28:24.769 "num_base_bdevs_discovered": 1, 00:28:24.769 "num_base_bdevs_operational": 1, 00:28:24.769 "base_bdevs_list": [ 00:28:24.769 { 00:28:24.769 "name": 
null, 00:28:24.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.769 "is_configured": false, 00:28:24.769 "data_offset": 0, 00:28:24.769 "data_size": 7936 00:28:24.769 }, 00:28:24.769 { 00:28:24.769 "name": "BaseBdev2", 00:28:24.769 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:24.769 "is_configured": true, 00:28:24.769 "data_offset": 256, 00:28:24.769 "data_size": 7936 00:28:24.769 } 00:28:24.769 ] 00:28:24.769 }' 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:24.769 [2024-11-08 17:16:01.431353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:24.769 [2024-11-08 17:16:01.431520] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:24.769 [2024-11-08 17:16:01.431536] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:24.769 request: 00:28:24.769 { 00:28:24.769 "base_bdev": "BaseBdev1", 00:28:24.769 "raid_bdev": "raid_bdev1", 00:28:24.769 "method": "bdev_raid_add_base_bdev", 00:28:24.769 "req_id": 1 00:28:24.769 } 00:28:24.769 Got JSON-RPC error response 00:28:24.769 response: 00:28:24.769 { 00:28:24.769 "code": -22, 00:28:24.769 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:24.769 } 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:24.769 17:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:26.141 "name": "raid_bdev1", 00:28:26.141 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:26.141 "strip_size_kb": 0, 
00:28:26.141 "state": "online", 00:28:26.141 "raid_level": "raid1", 00:28:26.141 "superblock": true, 00:28:26.141 "num_base_bdevs": 2, 00:28:26.141 "num_base_bdevs_discovered": 1, 00:28:26.141 "num_base_bdevs_operational": 1, 00:28:26.141 "base_bdevs_list": [ 00:28:26.141 { 00:28:26.141 "name": null, 00:28:26.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.141 "is_configured": false, 00:28:26.141 "data_offset": 0, 00:28:26.141 "data_size": 7936 00:28:26.141 }, 00:28:26.141 { 00:28:26.141 "name": "BaseBdev2", 00:28:26.141 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:26.141 "is_configured": true, 00:28:26.141 "data_offset": 256, 00:28:26.141 "data_size": 7936 00:28:26.141 } 00:28:26.141 ] 00:28:26.141 }' 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:26.141 17:16:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:26.141 "name": "raid_bdev1", 00:28:26.141 "uuid": "a44df013-f969-482b-821a-ad7e25ca1061", 00:28:26.141 "strip_size_kb": 0, 00:28:26.141 "state": "online", 00:28:26.141 "raid_level": "raid1", 00:28:26.141 "superblock": true, 00:28:26.141 "num_base_bdevs": 2, 00:28:26.141 "num_base_bdevs_discovered": 1, 00:28:26.141 "num_base_bdevs_operational": 1, 00:28:26.141 "base_bdevs_list": [ 00:28:26.141 { 00:28:26.141 "name": null, 00:28:26.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.141 "is_configured": false, 00:28:26.141 "data_offset": 0, 00:28:26.141 "data_size": 7936 00:28:26.141 }, 00:28:26.141 { 00:28:26.141 "name": "BaseBdev2", 00:28:26.141 "uuid": "c6eb43e2-96bc-5ac1-ac16-1f3db0e07ded", 00:28:26.141 "is_configured": true, 00:28:26.141 "data_offset": 256, 00:28:26.141 "data_size": 7936 00:28:26.141 } 00:28:26.141 ] 00:28:26.141 }' 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:26.141 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 87144 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 87144 ']' 00:28:26.399 17:16:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 87144 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87144 00:28:26.399 killing process with pid 87144 00:28:26.399 Received shutdown signal, test time was about 60.000000 seconds 00:28:26.399 00:28:26.399 Latency(us) 00:28:26.399 [2024-11-08T17:16:03.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.399 [2024-11-08T17:16:03.114Z] =================================================================================================================== 00:28:26.399 [2024-11-08T17:16:03.114Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87144' 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 87144 00:28:26.399 [2024-11-08 17:16:02.879694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:26.399 17:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 87144 00:28:26.399 [2024-11-08 17:16:02.879838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:26.399 [2024-11-08 17:16:02.879885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:28:26.399 [2024-11-08 17:16:02.879895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:28:26.399 [2024-11-08 17:16:03.040249] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:26.969 17:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:28:26.969 00:28:26.969 real 0m15.166s 00:28:26.969 user 0m19.257s 00:28:26.969 sys 0m1.193s 00:28:26.969 17:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:26.969 ************************************ 00:28:26.969 END TEST raid_rebuild_test_sb_md_interleaved 00:28:26.969 ************************************ 00:28:26.969 17:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:27.227 17:16:03 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:28:27.227 17:16:03 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:28:27.227 17:16:03 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 87144 ']' 00:28:27.227 17:16:03 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 87144 00:28:27.227 17:16:03 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:28:27.227 00:28:27.227 real 10m20.731s 00:28:27.227 user 13m23.518s 00:28:27.227 sys 1m35.802s 00:28:27.227 ************************************ 00:28:27.227 END TEST bdev_raid 00:28:27.227 ************************************ 00:28:27.227 17:16:03 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:27.227 17:16:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:27.227 17:16:03 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:27.227 17:16:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:27.227 17:16:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:27.227 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:28:27.227 
************************************ 00:28:27.227 START TEST spdkcli_raid 00:28:27.227 ************************************ 00:28:27.227 17:16:03 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:27.227 * Looking for test storage... 00:28:27.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:27.227 17:16:03 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:27.227 17:16:03 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:27.227 17:16:03 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:28:27.227 17:16:03 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:27.227 17:16:03 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:28:27.227 17:16:03 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.227 17:16:03 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:27.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.227 --rc genhtml_branch_coverage=1 00:28:27.227 --rc genhtml_function_coverage=1 00:28:27.227 --rc genhtml_legend=1 00:28:27.227 --rc geninfo_all_blocks=1 00:28:27.227 --rc geninfo_unexecuted_blocks=1 00:28:27.227 00:28:27.227 ' 00:28:27.227 17:16:03 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:27.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.227 --rc genhtml_branch_coverage=1 00:28:27.227 --rc genhtml_function_coverage=1 00:28:27.227 --rc genhtml_legend=1 00:28:27.227 --rc geninfo_all_blocks=1 00:28:27.227 --rc geninfo_unexecuted_blocks=1 00:28:27.227 00:28:27.227 ' 00:28:27.227 
17:16:03 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:27.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.227 --rc genhtml_branch_coverage=1 00:28:27.227 --rc genhtml_function_coverage=1 00:28:27.227 --rc genhtml_legend=1 00:28:27.227 --rc geninfo_all_blocks=1 00:28:27.227 --rc geninfo_unexecuted_blocks=1 00:28:27.227 00:28:27.227 ' 00:28:27.227 17:16:03 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:27.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.227 --rc genhtml_branch_coverage=1 00:28:27.227 --rc genhtml_function_coverage=1 00:28:27.227 --rc genhtml_legend=1 00:28:27.227 --rc geninfo_all_blocks=1 00:28:27.227 --rc geninfo_unexecuted_blocks=1 00:28:27.227 00:28:27.227 ' 00:28:27.227 17:16:03 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:27.227 17:16:03 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:27.227 17:16:03 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:27.227 17:16:03 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:28:27.227 17:16:03 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:28:27.228 17:16:03 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:27.228 17:16:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:28:27.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=87807 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 87807 00:28:27.228 17:16:03 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 87807 ']' 00:28:27.228 17:16:03 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.228 17:16:03 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:27.228 17:16:03 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.228 17:16:03 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:27.228 17:16:03 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:28:27.228 17:16:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:27.485 [2024-11-08 17:16:03.986113] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:28:27.485 [2024-11-08 17:16:03.986256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87807 ] 00:28:27.485 [2024-11-08 17:16:04.145434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:27.742 [2024-11-08 17:16:04.249204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.742 [2024-11-08 17:16:04.249205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.307 17:16:04 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:28.307 17:16:04 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:28:28.307 17:16:04 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:28:28.307 17:16:04 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:28.307 17:16:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:28.307 17:16:04 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:28:28.307 17:16:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:28.307 17:16:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:28.307 17:16:04 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:28.307 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:28.307 ' 00:28:29.720 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:28:29.720 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:28:29.720 17:16:06 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:28:29.720 17:16:06 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:29.720 17:16:06 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:28:29.977 17:16:06 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:28:29.977 17:16:06 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:29.977 17:16:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:29.977 17:16:06 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:28:29.977 ' 00:28:30.910 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:28:30.910 17:16:07 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:28:30.910 17:16:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:30.910 17:16:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:30.910 17:16:07 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:28:30.910 17:16:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.910 17:16:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:30.910 17:16:07 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:28:30.910 17:16:07 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:28:31.475 17:16:08 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:28:31.733 17:16:08 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:28:31.733 17:16:08 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:28:31.733 17:16:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:31.733 17:16:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:31.733 17:16:08 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:28:31.733 17:16:08 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:31.733 17:16:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:31.733 17:16:08 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:28:31.733 ' 00:28:32.664 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:28:32.664 17:16:09 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:28:32.664 17:16:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.664 17:16:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:32.664 17:16:09 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:28:32.664 17:16:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:32.664 17:16:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:32.664 17:16:09 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:28:32.664 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:28:32.664 ' 00:28:34.046 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:28:34.046 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:28:34.046 17:16:10 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:28:34.046 17:16:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:34.046 17:16:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:34.303 17:16:10 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 87807 00:28:34.303 17:16:10 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 87807 ']' 00:28:34.303 17:16:10 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 87807 00:28:34.303 17:16:10 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:28:34.303 17:16:10 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:34.303 17:16:10 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87807 00:28:34.303 killing process with pid 87807 00:28:34.304 17:16:10 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:34.304 17:16:10 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:34.304 17:16:10 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87807' 00:28:34.304 17:16:10 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 87807 00:28:34.304 17:16:10 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 87807 00:28:35.674 17:16:12 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:28:35.674 17:16:12 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 87807 ']' 00:28:35.674 17:16:12 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 87807 00:28:35.674 17:16:12 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 87807 ']' 00:28:35.674 17:16:12 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 87807 00:28:35.674 Process with pid 87807 is not found 00:28:35.674 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (87807) - No such process 00:28:35.674 17:16:12 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 87807 is not found' 00:28:35.674 17:16:12 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:28:35.674 17:16:12 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:35.674 17:16:12 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:35.674 17:16:12 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:35.674 ************************************ 00:28:35.674 END TEST spdkcli_raid 
00:28:35.674 ************************************ 00:28:35.674 00:28:35.674 real 0m8.377s 00:28:35.674 user 0m17.392s 00:28:35.674 sys 0m0.854s 00:28:35.674 17:16:12 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:35.674 17:16:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:35.674 17:16:12 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:35.674 17:16:12 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:35.674 17:16:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:35.674 17:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:35.674 ************************************ 00:28:35.674 START TEST blockdev_raid5f 00:28:35.674 ************************************ 00:28:35.674 17:16:12 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:35.674 * Looking for test storage... 00:28:35.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:35.674 17:16:12 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:35.674 17:16:12 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:28:35.674 17:16:12 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:35.674 17:16:12 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:35.674 17:16:12 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:35.674 17:16:12 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:35.674 17:16:12 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:35.674 17:16:12 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:28:35.674 17:16:12 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:28:35.674 17:16:12 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:28:35.674 17:16:12 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:28:35.674 17:16:12 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:28:35.674 17:16:12 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:28:35.674 17:16:12 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:35.675 17:16:12 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:35.675 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.675 --rc genhtml_branch_coverage=1 00:28:35.675 --rc genhtml_function_coverage=1 00:28:35.675 --rc genhtml_legend=1 00:28:35.675 --rc geninfo_all_blocks=1 00:28:35.675 --rc geninfo_unexecuted_blocks=1 00:28:35.675 00:28:35.675 ' 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.675 --rc genhtml_branch_coverage=1 00:28:35.675 --rc genhtml_function_coverage=1 00:28:35.675 --rc genhtml_legend=1 00:28:35.675 --rc geninfo_all_blocks=1 00:28:35.675 --rc geninfo_unexecuted_blocks=1 00:28:35.675 00:28:35.675 ' 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.675 --rc genhtml_branch_coverage=1 00:28:35.675 --rc genhtml_function_coverage=1 00:28:35.675 --rc genhtml_legend=1 00:28:35.675 --rc geninfo_all_blocks=1 00:28:35.675 --rc geninfo_unexecuted_blocks=1 00:28:35.675 00:28:35.675 ' 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.675 --rc genhtml_branch_coverage=1 00:28:35.675 --rc genhtml_function_coverage=1 00:28:35.675 --rc genhtml_legend=1 00:28:35.675 --rc geninfo_all_blocks=1 00:28:35.675 --rc geninfo_unexecuted_blocks=1 00:28:35.675 00:28:35.675 ' 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:28:35.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=88070 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' 
SIGINT SIGTERM EXIT 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 88070 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 88070 ']' 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:35.675 17:16:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:35.675 17:16:12 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:35.675 [2024-11-08 17:16:12.367058] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:28:35.675 [2024-11-08 17:16:12.367364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88070 ] 00:28:35.933 [2024-11-08 17:16:12.524814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.933 [2024-11-08 17:16:12.626856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:28:36.867 17:16:13 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:36.867 Malloc0 00:28:36.867 Malloc1 00:28:36.867 Malloc2 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "0ebc8b0f-962c-4681-a7b7-f5594df98f36"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0ebc8b0f-962c-4681-a7b7-f5594df98f36",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "0ebc8b0f-962c-4681-a7b7-f5594df98f36",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "dcd7be86-a1b4-461a-8869-14674f8b34d9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"187a381c-6fbe-4d94-a798-46a12719b097",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "34327d1e-7f13-4fdd-bcbd-a719613b51cb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:28:36.867 17:16:13 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 88070 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 88070 ']' 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 88070 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88070 00:28:36.867 killing process with pid 88070 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88070' 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 88070 00:28:36.867 17:16:13 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 88070 00:28:38.770 17:16:15 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:38.770 17:16:15 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:38.770 17:16:15 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:28:38.770 17:16:15 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:38.770 17:16:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:38.770 ************************************ 00:28:38.770 START TEST bdev_hello_world 00:28:38.770 ************************************ 00:28:38.770 17:16:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:38.770 [2024-11-08 17:16:15.334101] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:28:38.770 [2024-11-08 17:16:15.334204] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88126 ] 00:28:39.028 [2024-11-08 17:16:15.491651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.028 [2024-11-08 17:16:15.609587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.595 [2024-11-08 17:16:16.022870] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:39.595 [2024-11-08 17:16:16.022931] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:28:39.595 [2024-11-08 17:16:16.022947] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:39.595 [2024-11-08 17:16:16.023422] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:39.595 [2024-11-08 17:16:16.023544] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:39.595 [2024-11-08 17:16:16.023563] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:39.595 [2024-11-08 17:16:16.023619] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:28:39.595 00:28:39.595 [2024-11-08 17:16:16.023635] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:40.560 00:28:40.560 real 0m1.664s 00:28:40.560 user 0m1.335s 00:28:40.560 sys 0m0.208s 00:28:40.561 17:16:16 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:40.561 ************************************ 00:28:40.561 END TEST bdev_hello_world 00:28:40.561 ************************************ 00:28:40.561 17:16:16 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:40.561 17:16:16 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:28:40.561 17:16:16 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:40.561 17:16:16 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:40.561 17:16:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:40.561 ************************************ 00:28:40.561 START TEST bdev_bounds 00:28:40.561 ************************************ 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:28:40.561 Process bdevio pid: 88157 00:28:40.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=88157 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 88157' 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 88157 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 88157 ']' 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:40.561 17:16:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:40.561 [2024-11-08 17:16:17.052084] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:28:40.561 [2024-11-08 17:16:17.052205] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88157 ] 00:28:40.561 [2024-11-08 17:16:17.210609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:40.818 [2024-11-08 17:16:17.331868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.818 [2024-11-08 17:16:17.331951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.818 [2024-11-08 17:16:17.331952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.384 17:16:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:41.384 17:16:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:28:41.384 17:16:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:41.384 I/O targets: 00:28:41.384 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:28:41.384 00:28:41.384 00:28:41.384 CUnit - A unit testing framework for C - Version 2.1-3 00:28:41.384 http://cunit.sourceforge.net/ 00:28:41.384 00:28:41.384 00:28:41.384 Suite: bdevio tests on: raid5f 00:28:41.384 Test: blockdev write read block ...passed 00:28:41.384 Test: blockdev write zeroes read block ...passed 00:28:41.384 Test: blockdev write zeroes read no split ...passed 00:28:41.384 Test: blockdev write zeroes read split ...passed 00:28:41.642 Test: blockdev write zeroes read split partial ...passed 00:28:41.642 Test: blockdev reset ...passed 00:28:41.642 Test: blockdev write read 8 blocks ...passed 00:28:41.642 Test: blockdev write read size > 128k ...passed 00:28:41.642 Test: blockdev write read invalid size ...passed 00:28:41.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:28:41.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:41.642 Test: blockdev write read max offset ...passed 00:28:41.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:41.642 Test: blockdev writev readv 8 blocks ...passed 00:28:41.642 Test: blockdev writev readv 30 x 1block ...passed 00:28:41.642 Test: blockdev writev readv block ...passed 00:28:41.642 Test: blockdev writev readv size > 128k ...passed 00:28:41.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:41.642 Test: blockdev comparev and writev ...passed 00:28:41.642 Test: blockdev nvme passthru rw ...passed 00:28:41.642 Test: blockdev nvme passthru vendor specific ...passed 00:28:41.642 Test: blockdev nvme admin passthru ...passed 00:28:41.642 Test: blockdev copy ...passed 00:28:41.642 00:28:41.642 Run Summary: Type Total Ran Passed Failed Inactive 00:28:41.642 suites 1 1 n/a 0 0 00:28:41.642 tests 23 23 23 0 0 00:28:41.642 asserts 130 130 130 0 n/a 00:28:41.642 00:28:41.642 Elapsed time = 0.432 seconds 00:28:41.642 0 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 88157 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 88157 ']' 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 88157 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88157 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 88157' 00:28:41.642 killing process with pid 88157 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 88157 00:28:41.642 17:16:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 88157 00:28:42.575 17:16:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:28:42.575 ************************************ 00:28:42.575 END TEST bdev_bounds 00:28:42.575 ************************************ 00:28:42.575 00:28:42.575 real 0m2.166s 00:28:42.575 user 0m5.322s 00:28:42.575 sys 0m0.332s 00:28:42.575 17:16:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:42.575 17:16:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:42.575 17:16:19 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:28:42.575 17:16:19 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:28:42.575 17:16:19 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:42.575 17:16:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:42.575 ************************************ 00:28:42.575 START TEST bdev_nbd 00:28:42.575 ************************************ 00:28:42.575 17:16:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:28:42.575 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:28:42.575 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:28:42.575 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:42.575 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:42.575 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:28:42.575 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:28:42.575 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:28:42.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=88217 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 88217 /var/tmp/spdk-nbd.sock 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 88217 ']' 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:42.576 17:16:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:42.576 [2024-11-08 17:16:19.274512] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:28:42.576 [2024-11-08 17:16:19.274653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.833 [2024-11-08 17:16:19.448705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.089 [2024-11-08 17:16:19.573126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # 
nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:43.653 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:43.654 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:43.654 
17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:43.912 1+0 records in 00:28:43.912 1+0 records out 00:28:43.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351477 s, 11.7 MB/s 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:43.912 { 00:28:43.912 "nbd_device": "/dev/nbd0", 00:28:43.912 "bdev_name": "raid5f" 00:28:43.912 } 00:28:43.912 ]' 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:43.912 { 00:28:43.912 "nbd_device": "/dev/nbd0", 00:28:43.912 "bdev_name": "raid5f" 00:28:43.912 } 00:28:43.912 ]' 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock /dev/nbd0 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:43.912 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:44.170 17:16:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:28:44.428 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:28:44.429 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:44.429 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:44.429 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:28:44.429 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:44.429 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:44.429 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:28:44.707 /dev/nbd0 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:44.707 1+0 records in 00:28:44.707 1+0 records out 00:28:44.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326203 s, 12.6 MB/s 00:28:44.707 17:16:21 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:44.707 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:44.978 { 00:28:44.978 "nbd_device": "/dev/nbd0", 00:28:44.978 "bdev_name": "raid5f" 00:28:44.978 } 00:28:44.978 ]' 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:44.978 { 00:28:44.978 "nbd_device": "/dev/nbd0", 00:28:44.978 "bdev_name": "raid5f" 00:28:44.978 } 00:28:44.978 ]' 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # count=1 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:44.978 256+0 records in 00:28:44.978 256+0 records out 00:28:44.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00745974 s, 141 MB/s 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:44.978 256+0 records in 00:28:44.978 256+0 records out 00:28:44.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284304 s, 36.9 MB/s 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:44.978 17:16:21 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:44.978 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:44.979 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:44.979 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:44.979 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:44.979 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:44.979 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:44.979 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:44.979 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:44.979 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:44.979 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:45.236 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:45.236 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:45.236 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:45.236 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:45.236 
17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:45.236 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:45.236 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:45.236 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:45.237 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:45.237 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:45.237 17:16:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:45.495 17:16:22 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:28:45.495 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:45.753 malloc_lvol_verify 00:28:45.753 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:46.012 a62e95b5-b72e-4fb0-8b4f-fa66d20e30ca 00:28:46.012 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:46.270 d0087590-d88a-4092-91da-f064807f4180 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:46.270 /dev/nbd0 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:28:46.270 mke2fs 1.47.0 (5-Feb-2023) 00:28:46.270 Discarding device blocks: 0/4096 done 00:28:46.270 Creating filesystem with 4096 1k blocks and 1024 inodes 00:28:46.270 00:28:46.270 Allocating group tables: 0/1 done 00:28:46.270 Writing inode tables: 0/1 done 00:28:46.270 Creating journal (1024 blocks): done 00:28:46.270 Writing superblocks and filesystem accounting information: 0/1 
done 00:28:46.270 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:46.270 17:16:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 88217 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 88217 ']' 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 88217 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88217 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:46.528 killing process with pid 88217 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88217' 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 88217 00:28:46.528 17:16:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@976 -- # wait 88217 00:28:47.462 17:16:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:28:47.462 00:28:47.462 real 0m4.810s 00:28:47.462 user 0m6.930s 00:28:47.462 sys 0m1.046s 00:28:47.462 17:16:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:47.462 17:16:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:47.462 ************************************ 00:28:47.462 END TEST bdev_nbd 00:28:47.462 ************************************ 00:28:47.462 17:16:24 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:28:47.462 17:16:24 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:28:47.462 17:16:24 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:28:47.463 17:16:24 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:28:47.463 17:16:24 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:47.463 17:16:24 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:47.463 17:16:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:47.463 ************************************ 00:28:47.463 START TEST bdev_fio 00:28:47.463 
************************************ 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:28:47.463 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:47.463 
17:16:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:47.463 ************************************ 00:28:47.463 START TEST bdev_fio_rw_verify 00:28:47.463 ************************************ 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:28:47.463 17:16:24 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:47.463 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:28:47.721 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:47.721 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:47.721 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:28:47.721 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:47.721 17:16:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:47.721 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:47.721 fio-3.35 00:28:47.721 Starting 1 thread 00:28:59.911 00:28:59.911 job_raid5f: (groupid=0, jobs=1): err= 0: pid=88406: Fri Nov 8 17:16:35 2024 00:28:59.911 read: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(456MiB/10001msec) 00:28:59.911 slat (nsec): min=17652, max=53783, avg=20757.56, stdev=2578.15 00:28:59.911 clat (usec): min=9, max=349, avg=139.01, stdev=51.07 00:28:59.911 lat (usec): min=28, max=373, avg=159.77, stdev=51.75 00:28:59.911 clat percentiles (usec): 00:28:59.911 | 50.000th=[ 139], 99.000th=[ 249], 99.900th=[ 269], 
99.990th=[ 310], 00:28:59.911 | 99.999th=[ 351] 00:28:59.911 write: IOPS=12.3k, BW=47.9MiB/s (50.2MB/s)(472MiB/9867msec); 0 zone resets 00:28:59.911 slat (usec): min=7, max=324, avg=17.38, stdev= 3.23 00:28:59.911 clat (usec): min=56, max=1370, avg=312.49, stdev=49.38 00:28:59.911 lat (usec): min=72, max=1548, avg=329.87, stdev=50.82 00:28:59.911 clat percentiles (usec): 00:28:59.911 | 50.000th=[ 314], 99.000th=[ 420], 99.900th=[ 461], 99.990th=[ 947], 00:28:59.911 | 99.999th=[ 1303] 00:28:59.911 bw ( KiB/s): min=41336, max=53872, per=98.33%, avg=48194.11, stdev=4904.87, samples=19 00:28:59.911 iops : min=10334, max=13468, avg=12048.53, stdev=1226.22, samples=19 00:28:59.911 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=14.26%, 250=39.59% 00:28:59.911 lat (usec) : 500=46.11%, 750=0.02%, 1000=0.01% 00:28:59.911 lat (msec) : 2=0.01% 00:28:59.911 cpu : usr=99.20%, sys=0.23%, ctx=23, majf=0, minf=9636 00:28:59.911 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:59.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:59.911 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:59.911 issued rwts: total=116798,120902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:59.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:59.911 00:28:59.911 Run status group 0 (all jobs): 00:28:59.911 READ: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=456MiB (478MB), run=10001-10001msec 00:28:59.911 WRITE: bw=47.9MiB/s (50.2MB/s), 47.9MiB/s-47.9MiB/s (50.2MB/s-50.2MB/s), io=472MiB (495MB), run=9867-9867msec 00:28:59.911 ----------------------------------------------------- 00:28:59.911 Suppressions used: 00:28:59.911 count bytes template 00:28:59.911 1 7 /usr/src/fio/parse.c 00:28:59.911 478 45888 /usr/src/fio/iolog.c 00:28:59.911 1 8 libtcmalloc_minimal.so 00:28:59.911 1 904 libcrypto.so 00:28:59.911 ----------------------------------------------------- 00:28:59.911 
00:28:59.911 00:28:59.911 real 0m12.081s 00:28:59.911 user 0m12.424s 00:28:59.911 sys 0m0.379s 00:28:59.911 17:16:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:59.911 17:16:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:28:59.911 ************************************ 00:28:59.911 END TEST bdev_fio_rw_verify 00:28:59.912 ************************************ 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:28:59.912 
17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "0ebc8b0f-962c-4681-a7b7-f5594df98f36"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0ebc8b0f-962c-4681-a7b7-f5594df98f36",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "0ebc8b0f-962c-4681-a7b7-f5594df98f36",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "dcd7be86-a1b4-461a-8869-14674f8b34d9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "187a381c-6fbe-4d94-a798-46a12719b097",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "34327d1e-7f13-4fdd-bcbd-a719613b51cb",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:28:59.912 /home/vagrant/spdk_repo/spdk 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:28:59.912 00:28:59.912 real 0m12.254s 00:28:59.912 user 0m12.491s 00:28:59.912 sys 0m0.451s 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:59.912 17:16:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:59.912 ************************************ 00:28:59.912 END TEST bdev_fio 00:28:59.912 ************************************ 00:28:59.912 17:16:36 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:59.912 17:16:36 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:59.912 17:16:36 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:28:59.912 17:16:36 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:59.912 17:16:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:59.912 ************************************ 00:28:59.912 START TEST bdev_verify 00:28:59.912 ************************************ 00:28:59.912 17:16:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:59.912 [2024-11-08 17:16:36.405068] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:28:59.912 [2024-11-08 17:16:36.405188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88565 ] 00:28:59.912 [2024-11-08 17:16:36.562706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:00.170 [2024-11-08 17:16:36.682054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.170 [2024-11-08 17:16:36.682376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.428 Running I/O for 5 seconds... 00:29:02.731 15921.00 IOPS, 62.19 MiB/s [2024-11-08T17:16:40.379Z] 15418.50 IOPS, 60.23 MiB/s [2024-11-08T17:16:41.313Z] 14990.00 IOPS, 58.55 MiB/s [2024-11-08T17:16:42.247Z] 16723.00 IOPS, 65.32 MiB/s [2024-11-08T17:16:42.247Z] 17446.40 IOPS, 68.15 MiB/s 00:29:05.532 Latency(us) 00:29:05.532 [2024-11-08T17:16:42.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.532 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:05.532 Verification LBA range: start 0x0 length 0x2000 00:29:05.532 raid5f : 5.01 8677.98 33.90 0.00 0.00 22032.02 171.72 23693.78 00:29:05.532 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:05.532 Verification LBA range: start 0x2000 length 0x2000 00:29:05.532 raid5f : 5.01 8768.80 34.25 0.00 0.00 21808.04 183.53 23693.78 00:29:05.532 [2024-11-08T17:16:42.247Z] =================================================================================================================== 00:29:05.532 [2024-11-08T17:16:42.247Z] Total : 17446.78 
68.15 0.00 0.00 21919.45 171.72 23693.78 00:29:06.467 00:29:06.467 real 0m6.538s 00:29:06.467 user 0m12.176s 00:29:06.467 sys 0m0.213s 00:29:06.467 17:16:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:06.467 17:16:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:29:06.467 ************************************ 00:29:06.467 END TEST bdev_verify 00:29:06.467 ************************************ 00:29:06.467 17:16:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:06.467 17:16:42 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:29:06.468 17:16:42 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:06.468 17:16:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:06.468 ************************************ 00:29:06.468 START TEST bdev_verify_big_io 00:29:06.468 ************************************ 00:29:06.468 17:16:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:06.468 [2024-11-08 17:16:42.986119] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 
00:29:06.468 [2024-11-08 17:16:42.986239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88658 ] 00:29:06.468 [2024-11-08 17:16:43.145886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:06.725 [2024-11-08 17:16:43.264174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.725 [2024-11-08 17:16:43.264453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.983 Running I/O for 5 seconds... 00:29:09.285 887.00 IOPS, 55.44 MiB/s [2024-11-08T17:16:46.936Z] 1015.00 IOPS, 63.44 MiB/s [2024-11-08T17:16:47.908Z] 1036.00 IOPS, 64.75 MiB/s [2024-11-08T17:16:48.842Z] 1079.00 IOPS, 67.44 MiB/s [2024-11-08T17:16:49.099Z] 1103.80 IOPS, 68.99 MiB/s 00:29:12.384 Latency(us) 00:29:12.384 [2024-11-08T17:16:49.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.385 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:12.385 Verification LBA range: start 0x0 length 0x200 00:29:12.385 raid5f : 5.24 532.42 33.28 0.00 0.00 5906848.82 138.63 293601.28 00:29:12.385 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:12.385 Verification LBA range: start 0x200 length 0x200 00:29:12.385 raid5f : 5.22 583.80 36.49 0.00 0.00 5335286.10 158.33 259724.21 00:29:12.385 [2024-11-08T17:16:49.100Z] =================================================================================================================== 00:29:12.385 [2024-11-08T17:16:49.100Z] Total : 1116.22 69.76 0.00 0.00 5608633.67 138.63 293601.28 00:29:13.317 00:29:13.317 real 0m6.771s 00:29:13.317 user 0m12.632s 00:29:13.317 sys 0m0.226s 00:29:13.317 17:16:49 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:13.317 17:16:49 
blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.317 ************************************ 00:29:13.317 END TEST bdev_verify_big_io 00:29:13.317 ************************************ 00:29:13.317 17:16:49 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:13.317 17:16:49 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:29:13.317 17:16:49 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:13.317 17:16:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:13.317 ************************************ 00:29:13.317 START TEST bdev_write_zeroes 00:29:13.317 ************************************ 00:29:13.317 17:16:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:13.317 [2024-11-08 17:16:49.801192] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:29:13.317 [2024-11-08 17:16:49.801326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88752 ] 00:29:13.317 [2024-11-08 17:16:49.954730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.574 [2024-11-08 17:16:50.056861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.831 Running I/O for 1 seconds... 
00:29:14.764 28455.00 IOPS, 111.15 MiB/s 00:29:14.764 Latency(us) 00:29:14.764 [2024-11-08T17:16:51.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.764 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:14.764 raid5f : 1.01 28421.06 111.02 0.00 0.00 4490.47 1329.62 6074.68 00:29:14.764 [2024-11-08T17:16:51.479Z] =================================================================================================================== 00:29:14.764 [2024-11-08T17:16:51.479Z] Total : 28421.06 111.02 0.00 0.00 4490.47 1329.62 6074.68 00:29:15.697 00:29:15.697 real 0m2.458s 00:29:15.697 user 0m2.128s 00:29:15.697 sys 0m0.206s 00:29:15.697 17:16:52 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:15.697 17:16:52 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:15.697 ************************************ 00:29:15.697 END TEST bdev_write_zeroes 00:29:15.697 ************************************ 00:29:15.697 17:16:52 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:15.697 17:16:52 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:29:15.697 17:16:52 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:15.697 17:16:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:15.697 ************************************ 00:29:15.697 START TEST bdev_json_nonenclosed 00:29:15.697 ************************************ 00:29:15.697 17:16:52 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:15.697 [2024-11-08 
17:16:52.283684] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:29:15.697 [2024-11-08 17:16:52.283807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88794 ] 00:29:15.955 [2024-11-08 17:16:52.434487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.955 [2024-11-08 17:16:52.536414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.955 [2024-11-08 17:16:52.536513] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:15.955 [2024-11-08 17:16:52.536534] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:15.955 [2024-11-08 17:16:52.536544] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:16.212 00:29:16.212 real 0m0.471s 00:29:16.212 user 0m0.291s 00:29:16.212 sys 0m0.076s 00:29:16.212 17:16:52 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:16.212 17:16:52 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:16.212 ************************************ 00:29:16.212 END TEST bdev_json_nonenclosed 00:29:16.212 ************************************ 00:29:16.212 17:16:52 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:16.212 17:16:52 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:29:16.212 17:16:52 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:16.212 17:16:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:16.212 
************************************ 00:29:16.212 START TEST bdev_json_nonarray 00:29:16.212 ************************************ 00:29:16.212 17:16:52 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:16.212 [2024-11-08 17:16:52.804464] Starting SPDK v25.01-pre git sha1 5b0ad6d60 / DPDK 24.03.0 initialization... 00:29:16.212 [2024-11-08 17:16:52.804598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88814 ] 00:29:16.470 [2024-11-08 17:16:52.962808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.470 [2024-11-08 17:16:53.064381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.470 [2024-11-08 17:16:53.064479] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:29:16.470 [2024-11-08 17:16:53.064496] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:16.470 [2024-11-08 17:16:53.064510] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:16.729 00:29:16.729 real 0m0.493s 00:29:16.729 user 0m0.284s 00:29:16.729 sys 0m0.104s 00:29:16.729 17:16:53 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:16.729 ************************************ 00:29:16.729 END TEST bdev_json_nonarray 00:29:16.729 ************************************ 00:29:16.729 17:16:53 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:29:16.729 17:16:53 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:29:16.729 00:29:16.729 real 0m41.113s 00:29:16.729 user 0m56.671s 00:29:16.729 sys 0m3.625s 00:29:16.729 17:16:53 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:16.729 17:16:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:16.729 
************************************ 00:29:16.729 END TEST blockdev_raid5f 00:29:16.729 ************************************ 00:29:16.729 17:16:53 -- spdk/autotest.sh@194 -- # uname -s 00:29:16.729 17:16:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:29:16.729 17:16:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:29:16.729 17:16:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:29:16.729 17:16:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@256 -- # timing_exit lib 00:29:16.729 17:16:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:16.729 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:29:16.729 17:16:53 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:16.729 17:16:53 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:29:16.729 17:16:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:16.729 17:16:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:16.729 17:16:53 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:29:16.729 17:16:53 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:29:16.729 17:16:53 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:29:16.729 17:16:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:16.729 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:29:16.729 17:16:53 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:29:16.729 17:16:53 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:29:16.729 17:16:53 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:29:16.729 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:29:18.101 INFO: APP EXITING 00:29:18.101 INFO: killing all VMs 00:29:18.101 INFO: killing vhost app 00:29:18.101 INFO: EXIT DONE 00:29:18.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:18.101 Waiting for block devices as requested 00:29:18.101 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:18.358 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:18.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:18.924 Cleaning 00:29:18.924 Removing: /var/run/dpdk/spdk0/config 00:29:18.924 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:18.924 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:18.925 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:18.925 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:18.925 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:18.925 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:18.925 Removing: /dev/shm/spdk_tgt_trace.pid56251 00:29:18.925 Removing: /var/run/dpdk/spdk0 00:29:18.925 Removing: /var/run/dpdk/spdk_pid56038 00:29:18.925 Removing: /var/run/dpdk/spdk_pid56251 00:29:18.925 Removing: /var/run/dpdk/spdk_pid56464 00:29:18.925 Removing: /var/run/dpdk/spdk_pid56562 00:29:18.925 Removing: /var/run/dpdk/spdk_pid56607 00:29:18.925 Removing: /var/run/dpdk/spdk_pid56730 00:29:18.925 Removing: /var/run/dpdk/spdk_pid56748 
00:29:18.925 Removing: /var/run/dpdk/spdk_pid56949 00:29:18.925 Removing: /var/run/dpdk/spdk_pid57042 00:29:18.925 Removing: /var/run/dpdk/spdk_pid57138 00:29:18.925 Removing: /var/run/dpdk/spdk_pid57249 00:29:18.925 Removing: /var/run/dpdk/spdk_pid57346 00:29:18.925 Removing: /var/run/dpdk/spdk_pid57390 00:29:18.925 Removing: /var/run/dpdk/spdk_pid57422 00:29:18.925 Removing: /var/run/dpdk/spdk_pid57498 00:29:18.925 Removing: /var/run/dpdk/spdk_pid57598 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58040 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58104 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58167 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58183 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58314 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58331 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58468 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58484 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58543 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58566 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58619 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58637 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58808 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58839 00:29:18.925 Removing: /var/run/dpdk/spdk_pid58928 00:29:18.925 Removing: /var/run/dpdk/spdk_pid60211 00:29:18.925 Removing: /var/run/dpdk/spdk_pid60417 00:29:18.925 Removing: /var/run/dpdk/spdk_pid60546 00:29:18.925 Removing: /var/run/dpdk/spdk_pid61162 00:29:18.925 Removing: /var/run/dpdk/spdk_pid61362 00:29:18.925 Removing: /var/run/dpdk/spdk_pid61497 00:29:18.925 Removing: /var/run/dpdk/spdk_pid62118 00:29:18.925 Removing: /var/run/dpdk/spdk_pid62437 00:29:18.925 Removing: /var/run/dpdk/spdk_pid62577 00:29:18.925 Removing: /var/run/dpdk/spdk_pid63907 00:29:18.925 Removing: /var/run/dpdk/spdk_pid64149 00:29:18.925 Removing: /var/run/dpdk/spdk_pid64289 00:29:18.925 Removing: /var/run/dpdk/spdk_pid65630 00:29:18.925 Removing: /var/run/dpdk/spdk_pid65872 00:29:18.925 Removing: /var/run/dpdk/spdk_pid66012 
00:29:18.925 Removing: /var/run/dpdk/spdk_pid67336 00:29:18.925 Removing: /var/run/dpdk/spdk_pid67760 00:29:18.925 Removing: /var/run/dpdk/spdk_pid67900 00:29:18.925 Removing: /var/run/dpdk/spdk_pid69319 00:29:18.925 Removing: /var/run/dpdk/spdk_pid69567 00:29:18.925 Removing: /var/run/dpdk/spdk_pid69708 00:29:18.925 Removing: /var/run/dpdk/spdk_pid71133 00:29:18.925 Removing: /var/run/dpdk/spdk_pid71381 00:29:18.925 Removing: /var/run/dpdk/spdk_pid71521 00:29:18.925 Removing: /var/run/dpdk/spdk_pid72940 00:29:18.925 Removing: /var/run/dpdk/spdk_pid73411 00:29:18.925 Removing: /var/run/dpdk/spdk_pid73551 00:29:18.925 Removing: /var/run/dpdk/spdk_pid73689 00:29:18.925 Removing: /var/run/dpdk/spdk_pid74185 00:29:18.925 Removing: /var/run/dpdk/spdk_pid74932 00:29:18.925 Removing: /var/run/dpdk/spdk_pid75319 00:29:18.925 Removing: /var/run/dpdk/spdk_pid75999 00:29:18.925 Removing: /var/run/dpdk/spdk_pid76495 00:29:18.925 Removing: /var/run/dpdk/spdk_pid77259 00:29:18.925 Removing: /var/run/dpdk/spdk_pid77657 00:29:19.183 Removing: /var/run/dpdk/spdk_pid79549 00:29:19.183 Removing: /var/run/dpdk/spdk_pid79976 00:29:19.183 Removing: /var/run/dpdk/spdk_pid80398 00:29:19.183 Removing: /var/run/dpdk/spdk_pid82406 00:29:19.183 Removing: /var/run/dpdk/spdk_pid82870 00:29:19.183 Removing: /var/run/dpdk/spdk_pid83381 00:29:19.183 Removing: /var/run/dpdk/spdk_pid84418 00:29:19.183 Removing: /var/run/dpdk/spdk_pid84724 00:29:19.183 Removing: /var/run/dpdk/spdk_pid85629 00:29:19.183 Removing: /var/run/dpdk/spdk_pid85935 00:29:19.183 Removing: /var/run/dpdk/spdk_pid86836 00:29:19.183 Removing: /var/run/dpdk/spdk_pid87144 00:29:19.183 Removing: /var/run/dpdk/spdk_pid87807 00:29:19.183 Removing: /var/run/dpdk/spdk_pid88070 00:29:19.183 Removing: /var/run/dpdk/spdk_pid88126 00:29:19.183 Removing: /var/run/dpdk/spdk_pid88157 00:29:19.183 Removing: /var/run/dpdk/spdk_pid88392 00:29:19.183 Removing: /var/run/dpdk/spdk_pid88565 00:29:19.183 Removing: /var/run/dpdk/spdk_pid88658 
00:29:19.183 Removing: /var/run/dpdk/spdk_pid88752 00:29:19.183 Removing: /var/run/dpdk/spdk_pid88794 00:29:19.183 Removing: /var/run/dpdk/spdk_pid88814 00:29:19.183 Clean 00:29:19.183 17:16:55 -- common/autotest_common.sh@1451 -- # return 0 00:29:19.183 17:16:55 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:29:19.183 17:16:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:19.183 17:16:55 -- common/autotest_common.sh@10 -- # set +x 00:29:19.183 17:16:55 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:29:19.183 17:16:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:19.183 17:16:55 -- common/autotest_common.sh@10 -- # set +x 00:29:19.183 17:16:55 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:19.183 17:16:55 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:19.183 17:16:55 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:19.183 17:16:55 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:29:19.183 17:16:55 -- spdk/autotest.sh@394 -- # hostname 00:29:19.183 17:16:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:19.441 geninfo: WARNING: invalid characters removed from testname! 
00:29:41.365 17:17:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:43.933 17:17:20 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:46.461 17:17:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:48.990 17:17:25 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:51.515 17:17:27 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:54.041 17:17:30 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:56.657 17:17:33 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:56.657 17:17:33 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:56.657 17:17:33 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:56.657 17:17:33 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:56.657 17:17:33 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:56.657 17:17:33 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:56.657 + [[ -n 4993 ]] 00:29:56.657 + sudo kill 4993 00:29:56.697 [Pipeline] } 00:29:56.713 [Pipeline] // timeout 00:29:56.719 [Pipeline] } 00:29:56.735 [Pipeline] // stage 00:29:56.741 [Pipeline] } 00:29:56.757 [Pipeline] // catchError 00:29:56.766 [Pipeline] stage 00:29:56.769 [Pipeline] { (Stop VM) 00:29:56.782 [Pipeline] sh 00:29:57.059 + vagrant halt 00:29:59.585 ==> default: Halting domain... 00:30:06.190 [Pipeline] sh 00:30:06.471 + vagrant destroy -f 00:30:09.772 ==> default: Removing domain... 
00:30:09.785 [Pipeline] sh 00:30:10.073 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:30:10.084 [Pipeline] } 00:30:10.099 [Pipeline] // stage 00:30:10.106 [Pipeline] } 00:30:10.120 [Pipeline] // dir 00:30:10.126 [Pipeline] } 00:30:10.140 [Pipeline] // wrap 00:30:10.146 [Pipeline] } 00:30:10.158 [Pipeline] // catchError 00:30:10.168 [Pipeline] stage 00:30:10.170 [Pipeline] { (Epilogue) 00:30:10.184 [Pipeline] sh 00:30:10.472 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:17.083 [Pipeline] catchError 00:30:17.085 [Pipeline] { 00:30:17.098 [Pipeline] sh 00:30:17.438 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:17.438 Artifacts sizes are good 00:30:17.446 [Pipeline] } 00:30:17.461 [Pipeline] // catchError 00:30:17.475 [Pipeline] archiveArtifacts 00:30:17.483 Archiving artifacts 00:30:17.572 [Pipeline] cleanWs 00:30:17.584 [WS-CLEANUP] Deleting project workspace... 00:30:17.584 [WS-CLEANUP] Deferred wipeout is used... 00:30:17.591 [WS-CLEANUP] done 00:30:17.593 [Pipeline] } 00:30:17.609 [Pipeline] // stage 00:30:17.614 [Pipeline] } 00:30:17.628 [Pipeline] // node 00:30:17.634 [Pipeline] End of Pipeline 00:30:17.675 Finished: SUCCESS